From: Dan Williams <dan.j.williams@intel.com>
Date: Mon, 7 Jun 2021 12:22:08 -0700
Subject: Re: [PATCH v1 11/11] mm/gup: grab head page refcount once for group of subpages
To: Joao Martins
Cc: Linux MM <linux-mm@kvack.org>, Ira Weiny, Matthew Wilcox, Jason Gunthorpe,
 Jane Chu, Muchun Song, Mike Kravetz, Andrew Morton, nvdimm@lists.linux.dev
References: <20210325230938.30752-1-joao.m.martins@oracle.com>
 <20210325230938.30752-12-joao.m.martins@oracle.com>

On Mon, Jun 7, 2021 at 8:22 AM Joao Martins wrote:
>
> On 6/2/21 2:05 AM, Dan Williams wrote:
> > On Thu, Mar 25, 2021 at 4:10 PM Joao Martins wrote:
> >>
> >> Much like hugetlbfs or THPs, treat device pagemaps with
> >> compound pages like the rest of GUP handling of compound pages.
> >>
> >
> > How about:
> >
> > "Use try_grab_compound_head() for device-dax GUP when configured with
> > a compound pagemap."
> >
> Yeap, a bit clearer indeed.
>
> >> Rather than incrementing the refcount every 4K, we record
> >> all sub pages and increment by @refs amount *once*.
> >
> > "Rather than incrementing the refcount for each page, do one atomic
> > addition for all the pages to be pinned."
> >
> ACK.
>
> >> Performance measured by gup_benchmark improves considerably for
> >> get_user_pages_fast() and pin_user_pages_fast() with NVDIMMs:
> >>
> >> $ gup_test -f /dev/dax1.0 -m 16384 -r 10 -S [-u,-a] -n 512 -w
> >> (get_user_pages_fast 2M pages) ~59 ms -> ~6.1 ms
> >> (pin_user_pages_fast 2M pages) ~87 ms -> ~6.2 ms
> >> [altmap]
> >> (get_user_pages_fast 2M pages) ~494 ms -> ~9 ms
> >> (pin_user_pages_fast 2M pages) ~494 ms -> ~10 ms
> >
> > Hmm, what is altmap representing here? The altmap case does not
> > support compound geometry,
>
> It does support compound geometry, and so we use compound pages in the
> altmap case. What altmap doesn't support is the memory savings in the
> vmemmap that can be had when using compound pages. That's what is
> represented here.

Ah, I missed that detail; it might be good to mention this in the
Documentation/vm/ overview doc for this capability.

>
> > so this last test is comparing pinning this amount
> > of memory without compound pages where the memmap is in PMEM to the
> > speed *with* compound pages and the memmap in DRAM?
>
> The test compares pinning this amount of memory with compound pages
> placed in PMEM and in DRAM. It just exposes how inefficient this can
> get if huge pages aren't represented with compound pages.

Got it.
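As an aside, for anyone skimming the archive, the batching in that
reworded sentence boils down to something like the sketch below. This
is illustrative only, not the patch itself: FOLL_PIN additionally
scales the count by GUP_PIN_COUNTING_BIAS, and refcount-overflow
handling is omitted.

	/* Before: one atomic increment per PAGE_SIZE step. */
	for (i = 0; i < refs; i++)
		get_page(page + i);

	/* After: record the subpages, then take all refs in one shot. */
	refs = record_subpages(page, addr, next, pages + *nr);
	page_ref_add(compound_head(page), refs);	/* one atomic add */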
> >>
> >> $ gup_test -f /dev/dax1.0 -m 129022 -r 10 -S [-u,-a] -n 512 -w
> >> (get_user_pages_fast 2M pages) ~492 ms -> ~49 ms
> >> (pin_user_pages_fast 2M pages) ~493 ms -> ~50 ms
> >> [altmap with -m 127004]
> >> (get_user_pages_fast 2M pages) ~3.91 sec -> ~70 ms
> >> (pin_user_pages_fast 2M pages) ~3.97 sec -> ~74 ms
> >>
> >> Signed-off-by: Joao Martins
> >> ---
> >>  mm/gup.c | 52 ++++++++++++++++++++++++++++++++--------------------
> >>  1 file changed, 32 insertions(+), 20 deletions(-)
> >>
> >> diff --git a/mm/gup.c b/mm/gup.c
> >> index b3e647c8b7ee..514f12157a0f 100644
> >> --- a/mm/gup.c
> >> +++ b/mm/gup.c
> >> @@ -2159,31 +2159,54 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> >>  }
> >>  #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
> >>
> >> +
> >> +static int record_subpages(struct page *page, unsigned long addr,
> >> +			   unsigned long end, struct page **pages)
> >> +{
> >> +	int nr;
> >> +
> >> +	for (nr = 0; addr != end; addr += PAGE_SIZE)
> >> +		pages[nr++] = page++;
> >> +
> >> +	return nr;
> >> +}
> >> +
> >>  #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
> >>  static int __gup_device_huge(unsigned long pfn, unsigned long addr,
> >>  			     unsigned long end, unsigned int flags,
> >>  			     struct page **pages, int *nr)
> >>  {
> >> -	int nr_start = *nr;
> >> +	int refs, nr_start = *nr;
> >>  	struct dev_pagemap *pgmap = NULL;
> >>
> >>  	do {
> >> -		struct page *page = pfn_to_page(pfn);
> >> +		struct page *head, *page = pfn_to_page(pfn);
> >> +		unsigned long next;
> >>
> >>  		pgmap = get_dev_pagemap(pfn, pgmap);
> >>  		if (unlikely(!pgmap)) {
> >>  			undo_dev_pagemap(nr, nr_start, flags, pages);
> >>  			return 0;
> >>  		}
> >> -		SetPageReferenced(page);
> >> -		pages[*nr] = page;
> >> -		if (unlikely(!try_grab_page(page, flags))) {
> >> -			undo_dev_pagemap(nr, nr_start, flags, pages);
> >> +
> >> +		head = compound_head(page);
> >> +		next = PageCompound(head) ? end : addr + PAGE_SIZE;
> >
> > This looks a tad messy, and makes assumptions that upper layers are
> > not sending this routine multiple huge pages to map. next should be
> > set to the next compound page, not end.
>
> Although for devmap (and the same could be said for hugetlbfs),
> __gup_device_huge() (as called by __gup_device_huge_{pud,pmd}) would
> only ever be called on a compound page representing that same level,
> as opposed to many compound pages, i.e. @end already represents the
> next compound page at the PMD or PUD level.
>
> But of course, should we represent devmap pages in geometries where
> hpagesize/align is something other than PMD or PUD size, then it's
> true that relying on the @end value being the next compound page is
> fragile. But so is the rest of the surrounding code.

Ok, for now maybe a:

/* @end is assumed to be limited to at most 1 compound page */

...would remind whoever refactors this later about the assumption.

> >
> >> +		refs = record_subpages(page, addr, next, pages + *nr);
> >> +
> >> +		SetPageReferenced(head);
> >> +		head = try_grab_compound_head(head, refs, flags);
> >> +		if (!head) {
> >> +			if (PageCompound(head)) {
> >
> > @head is NULL here, I think you wanted to rename the result of
> > try_grab_compound_head() to something like pinned_head so that you
> > don't undo the work you did above.
>
> Yes. pinned_head is what I actually should have written. Let me fix that.
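To spell out the rename I am suggesting, something like the below
(sketch only, untested; the compound unwind branch stands in for
whatever the patch already does there):

		struct page *pinned_head;

		pinned_head = try_grab_compound_head(head, refs, flags);
		if (!pinned_head) {
			/*
			 * On failure @head is untouched, so this now
			 * tests a valid page rather than NULL.
			 */
			if (PageCompound(head)) {
				/* ...compound unwind as before... */
			} else {
				undo_dev_pagemap(nr, nr_start, flags, pages);
			}
			return 0;
		}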
> > However I feel like there's one too
> > many PageCompound() checks.
>
> I agree, but I am not fully sure how I can remove them :(

If you fix the bug above that's sufficient for me; I may be wishing for
something prettier that is not possible in practice...

> The previous approach was to separate the logic into two distinct
> helpers, namely __gup_device_huge() and __gup_device_compound_huge().
> But that sort of special casing wasn't a good idea, so I tried merging
> both cases in __gup_device_huge(), solely differentiating on
> PageCompound().
>
> I could make this slightly less bad by moving the error-case
> PageCompound() checks into undo_dev_pagemap() and record_subpages().
>
> But we still have the pagemap refcount to be taken until your other
> series removes the need for it. So perhaps I should place the
> remaining PageCompound() based check inside record_subpages() to
> accommodate the PAGE_SIZE geometry case (similarly hinted at by Jason
> in the previous version, but that I didn't fully address).
>
> How does the above sound?

Sounds worth a try, but not a hard requirement for this to move forward
from my perspective.

> Long term, once we stop having devmap use non-compound struct pages on
> PMDs/PUDs, and the pgmap refcount on gup is removed, then perhaps we
> can move to the existing regular huge page path that is not devmap
> specific.

Ok.
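FWIW, if you do try moving the check into record_subpages(), I imagine
it ends up looking something like the below (a hypothetical sketch of
the idea, untested; the caller would then derive @next from the
returned count rather than computing it up front):

static int record_subpages(struct page *page, unsigned long addr,
			   unsigned long end, struct page **pages)
{
	int nr;

	/*
	 * PAGE_SIZE geometry: a non-compound page only ever records
	 * itself, regardless of the @end the caller derived from the
	 * PMD/PUD extent.
	 */
	if (!PageCompound(page))
		end = addr + PAGE_SIZE;

	for (nr = 0; addr != end; addr += PAGE_SIZE)
		pages[nr++] = page++;

	return nr;
}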