Date: Thu, 21 Nov 2019 10:39:41 +0100
From: Jan Kara
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
	Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
	Dave Chinner, David Airlie, "David S . Miller", Ira Weiny, Jan Kara,
	Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
	Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
	Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
	Vlastimil Babka, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
	kvm@vger.kernel.org, linux-block@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, linux-mm@kvack.org, LKML
Subject: Re: [PATCH v7 17/24] mm/gup: track FOLL_PIN pages
Message-ID: <20191121093941.GA18190@quack2.suse.cz>
References: <20191121071354.456618-1-jhubbard@nvidia.com> <20191121071354.456618-18-jhubbard@nvidia.com>
In-Reply-To: <20191121071354.456618-18-jhubbard@nvidia.com>

On Wed 20-11-19 23:13:47, John Hubbard wrote:
> Add tracking of pages that were pinned via FOLL_PIN.
> 
> As mentioned in the FOLL_PIN documentation, callers who effectively set
> FOLL_PIN are required to ultimately free such pages via put_user_page().
> The effect is similar to FOLL_GET, and may be thought of as "FOLL_GET
> for DIO and/or RDMA use".
> 
> Pages that have been pinned via FOLL_PIN are identifiable via a
> new function call:
> 
>     bool page_dma_pinned(struct page *page);
> 
> What to do in response to encountering such a page, is left to later
> patchsets. There is discussion about this in [1], [2], and [3].
> 
> This also changes a BUG_ON(), to a WARN_ON(), in follow_page_mask().
> 
> [1] Some slow progress on get_user_pages() (Apr 2, 2019):
>     https://lwn.net/Articles/784574/
> [2] DMA and get_user_pages() (LPC: Dec 12, 2018):
>     https://lwn.net/Articles/774411/
> [3] The trouble with get_user_pages() (Apr 30, 2018):
>     https://lwn.net/Articles/753027/
> 
> Suggested-by: Jan Kara
> Suggested-by: Jérôme Glisse
> Signed-off-by: John Hubbard

Thanks for the patch! We are mostly getting there. Some smaller comments
below.

> +/**
> + * try_pin_compound_head() - mark a compound page as being used by
> + * pin_user_pages*().
> + *
> + * This is the FOLL_PIN counterpart to try_get_compound_head().
> + *
> + * @page: pointer to page to be marked
> + * @Return: true for success, false for failure
> + */
> +__must_check bool try_pin_compound_head(struct page *page, int refs)
> +{
> +	page = try_get_compound_head(page, GUP_PIN_COUNTING_BIAS * refs);
> +	if (!page)
> +		return false;
> +
> +	__update_proc_vmstat(page, NR_FOLL_PIN_REQUESTED, refs);
> +	return true;
> +}
> +
> +#ifdef CONFIG_DEV_PAGEMAP_OPS
> +static bool __put_devmap_managed_user_page(struct page *page)

Probably call this __unpin_devmap_managed_user_page()? To match the later
conversion of put_user_page() to unpin_user_page()?
> +{
> +	bool is_devmap = page_is_devmap_managed(page);
> +
> +	if (is_devmap) {
> +		int count = page_ref_sub_return(page, GUP_PIN_COUNTING_BIAS);
> +
> +		__update_proc_vmstat(page, NR_FOLL_PIN_RETURNED, 1);
> +		/*
> +		 * devmap page refcounts are 1-based, rather than 0-based: if
> +		 * refcount is 1, then the page is free and the refcount is
> +		 * stable because nobody holds a reference on the page.
> +		 */
> +		if (count == 1)
> +			free_devmap_managed_page(page);
> +		else if (!count)
> +			__put_page(page);
> +	}
> +
> +	return is_devmap;
> +}
> +#else
> +static bool __put_devmap_managed_user_page(struct page *page)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_DEV_PAGEMAP_OPS */
> +
> +/**
> + * put_user_page() - release a dma-pinned page
> + * @page: pointer to page to be released
> + *
> + * Pages that were pinned via pin_user_pages*() must be released via either
> + * put_user_page(), or one of the put_user_pages*() routines. This is so that
> + * such pages can be separately tracked and uniquely handled. In particular,
> + * interactions with RDMA and filesystems need special handling.
> + */
> +void put_user_page(struct page *page)
> +{
> +	page = compound_head(page);
> +
> +	/*
> +	 * For devmap managed pages we need to catch refcount transition from
> +	 * GUP_PIN_COUNTING_BIAS to 1, when refcount reach one it means the
> +	 * page is free and we need to inform the device driver through
> +	 * callback. See include/linux/memremap.h and HMM for details.
> +	 */
> +	if (__put_devmap_managed_user_page(page))
> +		return;
> +
> +	if (page_ref_sub_and_test(page, GUP_PIN_COUNTING_BIAS))
> +		__put_page(page);
> +
> +	__update_proc_vmstat(page, NR_FOLL_PIN_RETURNED, 1);
> +}
> +EXPORT_SYMBOL(put_user_page);
> +
>  /**
>   * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
>   * @pages:  array of pages to be maybe marked dirty, and definitely released.
> @@ -237,10 +327,11 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  	}
>  
>  	page = vm_normal_page(vma, address, pte);
> -	if (!page && pte_devmap(pte) && (flags & FOLL_GET)) {
> +	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
>  		/*
> -		 * Only return device mapping pages in the FOLL_GET case since
> -		 * they are only valid while holding the pgmap reference.
> +		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
> +		 * case since they are only valid while holding the pgmap
> +		 * reference.
>  		 */
>  		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
>  		if (*pgmap)
> @@ -283,6 +374,11 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  			page = ERR_PTR(-ENOMEM);
>  			goto out;
>  		}
> +	} else if (flags & FOLL_PIN) {
> +		if (unlikely(!try_pin_page(page))) {
> +			page = ERR_PTR(-ENOMEM);
> +			goto out;
> +		}

Use grab_page() here?

>  	}
>  	if (flags & FOLL_TOUCH) {
>  		if ((flags & FOLL_WRITE) &&
> @@ -1890,9 +2000,15 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>  		page = pte_page(pte);
>  
> -		head = try_get_compound_head(page, 1);
> -		if (!head)
> -			goto pte_unmap;
> +		if (flags & FOLL_PIN) {
> +			head = page;
> +			if (unlikely(!try_pin_page(head)))
> +				goto pte_unmap;
> +		} else {
> +			head = try_get_compound_head(page, 1);
> +			if (!head)
> +				goto pte_unmap;
> +		}

Why don't you use grab_page() here? Also you seem to lose the
head = compound_head(page) indirection here for the FOLL_PIN case?
>  
>  		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
>  			put_page(head);
> @@ -1946,12 +2062,20 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
>  
>  		pgmap = get_dev_pagemap(pfn, pgmap);
>  		if (unlikely(!pgmap)) {
> -			undo_dev_pagemap(nr, nr_start, pages);
> +			undo_dev_pagemap(nr, nr_start, flags, pages);
>  			return 0;
>  		}
>  		SetPageReferenced(page);
>  		pages[*nr] = page;
> -		get_page(page);
> +
> +		if (flags & FOLL_PIN) {
> +			if (unlikely(!try_pin_page(page))) {
> +				undo_dev_pagemap(nr, nr_start, flags, pages);
> +				return 0;
> +			}
> +		} else
> +			get_page(page);
> +

Use grab_page() here?

>  		(*nr)++;
>  		pfn++;
>  	} while (addr += PAGE_SIZE, addr != end);

...

> @@ -2025,12 +2149,31 @@ static int __record_subpages(struct page *page, unsigned long addr,
>  	return nr;
>  }
>  
> -static void put_compound_head(struct page *page, int refs)
> +static bool grab_compound_head(struct page *head, int refs, unsigned int flags)
>  {
> +	if (flags & FOLL_PIN) {
> +		if (unlikely(!try_pin_compound_head(head, refs)))
> +			return false;
> +	} else {
> +		head = try_get_compound_head(head, refs);
> +		if (!head)
> +			return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void put_compound_head(struct page *page, int refs, unsigned int flags)
> +{
> +	struct page *head = compound_head(page);
> +
> +	if (flags & FOLL_PIN)
> +		refs *= GUP_PIN_COUNTING_BIAS;
> +
>  	/* Do a get_page() first, in case refs == page->_refcount */
> -	get_page(page);
> -	page_ref_sub(page, refs);
> -	put_page(page);
> +	get_page(head);
> +	page_ref_sub(head, refs);
> +	put_page(head);
>  }
>  
>  #ifdef CONFIG_ARCH_HAS_HUGEPD
> @@ -2064,14 +2207,13 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
>  
>  	head = pte_page(pte);
>  	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
> -	refs = __record_subpages(page, addr, end, pages + *nr);
> +	refs = record_subpages(page, addr, end, pages + *nr);
>  
> -	head = try_get_compound_head(head, refs);
> -	if (!head)
> +	if (!grab_compound_head(head, refs, flags))

Are you sure this is correct? Historically we seem to have always had logic
like:

	head = compound_head(pte_page / pmd_page / ... (orig))

in this code. And you removed this now. Looking at the code I'm not sure
whether the compound_head() indirection is really needed or not. We already
seem to have the huge page head in the page table, but maybe there's some
subtle case I'm missing. So I'd be calmer if we left the
head = compound_head(...) in the code, but if you really want to remove it,
I'd like to see an Ack from someone actually familiar with huge pages -
e.g. Kirill Shutemov...

And even if we find out that the compound_head() indirection isn't really
needed, that is a big enough change in the logic that it would deserve to be
done in a separate patch (if only for debugging by bisection purposes).

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 13cc93785006..981a9ea0b83f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c

...

> @@ -5034,8 +5052,20 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>  	pte = huge_ptep_get((pte_t *)pmd);
>  	if (pte_present(pte)) {
>  		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
> +
>  		if (flags & FOLL_GET)
>  			get_page(page);
> +		else if (flags & FOLL_PIN) {
> +			/*
> +			 * try_pin_page() is not actually expected to fail
> +			 * here because we hold the ptl.
> +			 */
> +			if (unlikely(!try_pin_page(page))) {
> +				WARN_ON_ONCE(1);
> +				page = NULL;
> +				goto out;
> +			}
> +		}

Use grab_page() here?

								Honza
-- 
Jan Kara
SUSE Labs, CR