MIME-Version: 1.0
References: <20210714193542.21857-1-joao.m.martins@oracle.com>
 <20210714193542.21857-9-joao.m.martins@oracle.com>
 <131e77ec-6de4-8401-e7b0-7ff12abac04c@oracle.com>
In-Reply-To: <131e77ec-6de4-8401-e7b0-7ff12abac04c@oracle.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Wed, 28 Jul 2021 11:03:16 -0700
Subject: Re: [PATCH v3 08/14] mm/sparse-vmemmap: populate compound pagemaps
To: Joao Martins <joao.m.martins@oracle.com>
Cc: Linux
 MM <linux-mm@kvack.org>, Vishal Verma, Dave Jiang, Naoya Horiguchi,
 Matthew Wilcox, Jason Gunthorpe, John Hubbard, Jane Chu, Muchun Song,
 Mike Kravetz, Andrew Morton, Jonathan Corbet, Linux NVDIMM,
 Linux Doc Mailing List
Content-Type: text/plain; charset="UTF-8"

On Wed, Jul 28, 2021 at 8:36 AM Joao Martins wrote:
[..]
> +/*
> + * For compound pages bigger than section size (e.g. x86 1G compound
> + * pages with 2M subsection size) fill the rest of sections as tail
> + * pages.
> + *
> + * Note that memremap_pages() resets @nr_range value and will increment
> + * it after each range's successful onlining. Thus the value of @nr_range
> + * at section memmap populate corresponds to the in-progress range
> + * being onlined here.
> + */
> +static bool compound_section_index(unsigned long start_pfn,

Oh, I was thinking this would return the actual Nth index number for
the section within the compound page. A bool is ok too, but then the
function name would be something like:

reuse_compound_section()

...right?

[..]
> [...]
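For what it's worth, here is a userspace-compilable sketch of that naming suggestion. The simplified `pgmap_sketch` type and the section/geometry constants are illustrative assumptions, not the kernel's actual `struct dev_pagemap`; the point is only that the helper reads naturally as a reuse predicate rather than an index:

```c
#include <assert.h>
#include <stdbool.h>

/* Assumption: 128M sections of 4K pages, as on x86_64. */
#define PAGES_PER_SECTION (1UL << 15)

/* Hypothetical stand-in for the relevant dev_pagemap fields. */
struct pgmap_sketch {
	unsigned long range_start_pfn;	/* first pfn of the in-progress range */
	unsigned long geometry_pfns;	/* compound page size, in pages */
};

/*
 * Boolean predicate named per the suggestion above: true when
 * @start_pfn's section is not the head section of its compound page,
 * so its tail-page memmap can reuse pages populated for an earlier
 * section.  @start_pfn is assumed section-aligned, so a nonzero
 * remainder means this section sits inside a compound page that
 * started in a previous section.
 */
static bool reuse_compound_section(unsigned long start_pfn,
				   const struct pgmap_sketch *pgmap)
{
	unsigned long offset = start_pfn - pgmap->range_start_pfn;

	return (offset % pgmap->geometry_pfns) != 0;
}
```

With 1G compound pages (1UL << 18 pages of 4K), the head section populates and every following section of the same compound page returns true.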
> And here's compound_section_tail_huge_page() (for the last patch in the series):
>
> @@ -690,6 +727,33 @@ static struct page * __meminit compound_section_tail_page(unsigned long addr)
>  	return pte_page(*ptep);
>  }
>
> +static struct page * __meminit compound_section_tail_huge_page(unsigned long addr,
> +				unsigned long offset, struct dev_pagemap *pgmap)
> +{
> +	unsigned long geometry_size = pgmap_geometry(pgmap) << PAGE_SHIFT;
> +	pmd_t *pmdp;
> +
> +	addr -= PAGE_SIZE;
> +
> +	/*
> +	 * Assuming sections are populated sequentially, the previous section's
> +	 * page data can be reused.
> +	 */
> +	pmdp = pmd_off_k(addr);
> +	if (!pmdp)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/*
> +	 * Reuse the tail pages vmemmap pmd page
> +	 * See layout diagram in Documentation/vm/vmemmap_dedup.rst
> +	 */
> +	if (offset % geometry_size > PFN_PHYS(PAGES_PER_SECTION))
> +		return pmd_page(*pmdp);
> +
> +	/* No reusable PMD, fall back to PTE tail page */
> +	return NULL;
> +}
> +
>  static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
> 						unsigned long start,
> 						unsigned long end, int node,
> @@ -697,14 +761,22 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
>  {
>  	unsigned long offset, size, addr;
>
> -	if (compound_section_index(start_pfn, pgmap)) {
> -		struct page *page;
> +	if (compound_section_index(start_pfn, pgmap, &offset)) {
> +		struct page *page, *hpage;
> +
> +		hpage = compound_section_tail_huge_page(addr, offset);
> +		if (IS_ERR(hpage))
> +			return -ENOMEM;
> +		else if (hpage)

No need for "else" after return... other than that these helpers and
this arrangement looks good to me.
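To make the reuse condition in that hunk concrete, here is a small userspace model of the `offset % geometry_size > PFN_PHYS(PAGES_PER_SECTION)` check. The constants are illustrative assumptions (4K pages, 128M sections), not pulled from the kernel; the shape of the test is the point — only sections more than one section's worth into a compound page can reuse the previous section's PMD-mapped tail pages, while the second section still falls back to the PTE tail page:

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SHIFT        12			/* assumption: 4K pages */
#define PAGES_PER_SECTION (1UL << 15)		/* assumption: 128M sections */
#define PFN_PHYS(pfn)     ((unsigned long)(pfn) << PAGE_SHIFT)

/*
 * Model of the decision in compound_section_tail_huge_page():
 * @offset is the byte offset of this section's memmap from the range
 * start, @geometry_size the compound page size in bytes (e.g. 1G).
 * Returns true when the previous section's PMD page is reusable.
 */
static bool can_reuse_pmd(unsigned long offset, unsigned long geometry_size)
{
	return offset % geometry_size > PFN_PHYS(PAGES_PER_SECTION);
}
```

With 1G geometry, offsets 0 and one-section-in both say "no reuse" (head section and first tail section), while deeper sections of the same compound page say "reuse", and the check resets at the next compound page boundary.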