From: Dan Williams
Date: Wed, 28 Jul 2021 13:23:58 -0700
Subject: Re: [PATCH v3 13/14] mm/gup: grab head page refcount once for group of subpages
To: Joao Martins
Cc: Linux MM, Vishal Verma, Dave Jiang, Naoya Horiguchi, Matthew Wilcox,
 Jason Gunthorpe, John Hubbard, Jane Chu, Muchun Song, Mike Kravetz,
 Andrew Morton, Jonathan Corbet, Linux NVDIMM, Linux Doc Mailing List
In-Reply-To: <861f03ee-f8c8-cc89-3fc2-884c062fea11@oracle.com>
References: <20210714193542.21857-1-joao.m.martins@oracle.com> <20210714193542.21857-14-joao.m.martins@oracle.com> <861f03ee-f8c8-cc89-3fc2-884c062fea11@oracle.com>

On Wed, Jul 28, 2021 at 1:08 PM Joao Martins wrote:
>
>
>
> On 7/28/21 8:55 PM, Dan Williams wrote:
> > On Wed, Jul 14, 2021 at 12:36 PM Joao Martins wrote:
> >>
> >> Use try_grab_compound_head() for device-dax GUP when configured with a
> >> compound pagemap.
> >>
> >> Rather than incrementing the refcount for each page, do one atomic
> >> addition for all the pages to be pinned.
> >>
> >> Performance measured by gup_benchmark improves considerably
> >> get_user_pages_fast() and pin_user_pages_fast() with NVDIMMs:
> >>
> >> $ gup_test -f /dev/dax1.0 -m 16384 -r 10 -S [-u,-a] -n 512 -w
> >> (get_user_pages_fast 2M pages) ~59 ms -> ~6.1 ms
> >> (pin_user_pages_fast 2M pages) ~87 ms -> ~6.2 ms
> >> [altmap]
> >> (get_user_pages_fast 2M pages) ~494 ms -> ~9 ms
> >> (pin_user_pages_fast 2M pages) ~494 ms -> ~10 ms
> >>
> >> $ gup_test -f /dev/dax1.0 -m 129022 -r 10 -S [-u,-a] -n 512 -w
> >> (get_user_pages_fast 2M pages) ~492 ms -> ~49 ms
> >> (pin_user_pages_fast 2M pages) ~493 ms -> ~50 ms
> >> [altmap with -m 127004]
> >> (get_user_pages_fast 2M pages) ~3.91 sec -> ~70 ms
> >> (pin_user_pages_fast 2M pages) ~3.97 sec -> ~74 ms
> >>
> >> Signed-off-by: Joao Martins
> >> ---
> >>  mm/gup.c | 53 +++++++++++++++++++++++++++++++++--------------------
> >>  1 file changed, 33 insertions(+), 20 deletions(-)
> >>
> >> diff --git a/mm/gup.c b/mm/gup.c
> >> index 42b8b1fa6521..9baaa1c0b7f3 100644
> >> --- a/mm/gup.c
> >> +++ b/mm/gup.c
> >> @@ -2234,31 +2234,55 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> >>  }
> >>  #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
> >>
> >> +
> >> +static int record_subpages(struct page *page, unsigned long addr,
> >> +			   unsigned long end, struct page **pages)
> >> +{
> >> +	int nr;
> >> +
> >> +	for (nr = 0; addr != end; addr += PAGE_SIZE)
> >> +		pages[nr++] = page++;
> >> +
> >> +	return nr;
> >> +}
> >> +
> >>  #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
> >>  static int __gup_device_huge(unsigned long pfn, unsigned long addr,
> >>  			     unsigned long end, unsigned int flags,
> >>  			     struct page **pages, int *nr)
> >>  {
> >> -	int nr_start = *nr;
> >> +	int refs, nr_start = *nr;
> >>  	struct dev_pagemap *pgmap = NULL;
> >>
> >>  	do {
> >> -		struct page *page = pfn_to_page(pfn);
> >> +		struct page *pinned_head, *head, *page = pfn_to_page(pfn);
> >> +		unsigned long next;
> >>
> >>  		pgmap = get_dev_pagemap(pfn, pgmap);
> >>  		if (unlikely(!pgmap)) {
> >>  			undo_dev_pagemap(nr, nr_start, flags, pages);
> >>  			return 0;
> >>  		}
> >> -		SetPageReferenced(page);
> >> -		pages[*nr] = page;
> >> -		if (unlikely(!try_grab_page(page, flags))) {
> >> -			undo_dev_pagemap(nr, nr_start, flags, pages);
> >> +
> >> +		head = compound_head(page);
> >> +		/* @end is assumed to be limited at most one compound page */
> >> +		next = PageCompound(head) ? end : addr + PAGE_SIZE;
> >
> > Please no ternary operator for this check, but otherwise this patch
> > looks good to me.
> >
> OK. I take that you prefer this instead:
>
> unsigned long next = addr + PAGE_SIZE;
>
> [...]
>
> /* @end is assumed to be limited at most one compound page */
> if (PageCompound(head))
> 	next = end;

Yup.
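
For readers skimming the thread, the speedup being discussed comes from replacing
one atomic refcount increment per subpage with a single atomic addition on the
compound head page. Below is a minimal, self-contained user-space sketch of that
difference; it is illustrative only, not the kernel code -- the names and the count
of 512 are made up, and the patch itself does the batched grab via
try_grab_compound_head().

#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for a compound head page refcount; illustrative only. */
static atomic_int head_refcount = 1;

/* Old scheme: one atomic read-modify-write per subpage. */
static void grab_per_page(int nr_pages)
{
	for (int i = 0; i < nr_pages; i++)
		atomic_fetch_add(&head_refcount, 1);
}

/* New scheme: one atomic addition covering the whole group of subpages. */
static void grab_batched(int nr_pages)
{
	atomic_fetch_add(&head_refcount, nr_pages);
}

int main(void)
{
	grab_per_page(512);	/* 512 atomic ops: per-page try_grab_page() analogue */
	grab_batched(512);	/* 1 atomic op: try_grab_compound_head() analogue */
	printf("head refcount: %d\n", atomic_load(&head_refcount));
	return 0;
}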