From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
 Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner, David Airlie,
 David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jonathan Corbet, Jérôme Glisse, Magnus Karlsson, Mauro Carvalho Chehab,
 Michael Ellerman, Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
 Vlastimil Babka, LKML, John Hubbard, Christoph Hellwig, Aneesh Kumar K.V
Subject: [PATCH v7 02/24] mm/gup: factor out duplicate code from four routines
Date: Wed, 20 Nov 2019 23:13:32 -0800
Message-ID: <20191121071354.456618-3-jhubbard@nvidia.com>
In-Reply-To: <20191121071354.456618-1-jhubbard@nvidia.com>
References: <20191121071354.456618-1-jhubbard@nvidia.com>

There are four locations in gup.c that have a fair amount of code
duplication. This means that changing one requires making the same
changes in four places, not to mention reading the same code four times,
and wondering if there are subtle differences.

Factor out the common code into static functions, thus reducing the
overall line count and the code's complexity.

Also, take the opportunity to slightly improve the efficiency of the
error cases, by doing a mass subtraction of the refcount, surrounded by
get_page()/put_page().

Also, further simplify (slightly), by waiting until the successful end
of each routine to increment *nr.
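For clarity, the refcount handling that the error paths switch to amounts
to the following standalone sketch of the put_compound_head() helper added
in the diff below. The body is taken verbatim from the patch; only the
comments are explanatory and reflect my reading of it:

static void put_compound_head(struct page *page, int refs)
{
	/*
	 * Take a temporary reference first, in case refs == page->_refcount,
	 * so the count cannot reach zero while the batched references are
	 * removed in a single page_ref_sub() rather than refs separate
	 * put_page() calls.
	 */
	get_page(page);
	page_ref_sub(page, refs);
	/* Drop the temporary reference; releases the page if it was the last. */
	put_page(page);
}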
Reviewed-by: Jérôme Glisse
Reviewed-by: Jan Kara
Cc: Ira Weiny
Cc: Christoph Hellwig
Cc: Aneesh Kumar K.V
Signed-off-by: John Hubbard
---
 mm/gup.c | 91 ++++++++++++++++++++++----------------------------------
 1 file changed, 36 insertions(+), 55 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 85caf76b3012..f3c7d6625817 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1969,6 +1969,25 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
+static int __record_subpages(struct page *page, unsigned long addr,
+			     unsigned long end, struct page **pages)
+{
+	int nr;
+
+	for (nr = 0; addr != end; addr += PAGE_SIZE)
+		pages[nr++] = page++;
+
+	return nr;
+}
+
+static void put_compound_head(struct page *page, int refs)
+{
+	/* Do a get_page() first, in case refs == page->_refcount */
+	get_page(page);
+	page_ref_sub(page, refs);
+	put_page(page);
+}
+
 #ifdef CONFIG_ARCH_HAS_HUGEPD
 static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
 				      unsigned long sz)
@@ -1998,32 +2017,20 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	refs = 0;
 	head = pte_page(pte);
-
 	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
-	do {
-		VM_BUG_ON(compound_head(page) != head);
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages + *nr);
 
 	head = try_get_compound_head(head, refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-		/* Could be optimized better */
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_compound_head(head, refs);
 		return 0;
 	}
 
+	*nr += refs;
 	SetPageReferenced(head);
 	return 1;
 }
@@ -2071,28 +2078,19 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	refs = 0;
 	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages + *nr);
 
 	head = try_get_compound_head(pmd_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_compound_head(head, refs);
 		return 0;
 	}
 
+	*nr += refs;
 	SetPageReferenced(head);
 	return 1;
 }
@@ -2114,28 +2112,19 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	refs = 0;
 	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages + *nr);
 
 	head = try_get_compound_head(pud_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_compound_head(head, refs);
 		return 0;
 	}
 
+	*nr += refs;
 	SetPageReferenced(head);
 	return 1;
 }
@@ -2151,28 +2140,20 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 		return 0;
 
 	BUILD_BUG_ON(pgd_devmap(orig));
-	refs = 0;
+
 	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	do {
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
+	refs = __record_subpages(page, addr, end, pages + *nr);
 
 	head = try_get_compound_head(pgd_page(orig), refs);
-	if (!head) {
-		*nr -= refs;
+	if (!head)
 		return 0;
-	}
 
 	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_compound_head(head, refs);
 		return 0;
 	}
 
+	*nr += refs;
 	SetPageReferenced(head);
 	return 1;
 }
-- 
2.24.0