From: Dan Williams
Date: Fri, 19 Feb 2021 19:34:05 -0800
Subject: Re: [PATCH RFC 3/9] sparse-vmemmap: Reuse vmemmap areas for a given page size
To: Joao Martins
Cc: Linux MM, Ira Weiny, linux-nvdimm, Matthew Wilcox, Jason Gunthorpe,
    Jane Chu, Muchun Song, Mike Kravetz, Andrew Morton
In-Reply-To: <20201208172901.17384-5-joao.m.martins@oracle.com>
References: <20201208172901.17384-1-joao.m.martins@oracle.com>
    <20201208172901.17384-5-joao.m.martins@oracle.com>

On Tue, Dec 8, 2020 at 9:32 AM Joao Martins wrote:
>
> Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
> struct pages are onlined with a given alignment, and should reuse the
> tail pages vmemmap areas. On that circunstamce we reuse the PFN backing

s/On that circunstamce we reuse/Reuse/

Kills a "we" and switches to the imperative mood. I noticed a couple
other "we"s in the previous patches, but this crossed my threshold to
make a comment.

> only the tail pages subsections, while letting the head page PFN remain
> different. This presumes that the backing page structs are compound
> pages, such as the case for compound pagemaps (i.e. ZONE_DEVICE with
> PGMAP_COMPOUND set)
>
> On 2M compound pagemaps, it lets us save 6 pages out of the 8 necessary

s/lets us save/saves/

> PFNs necessary

s/8 necessary PFNs necessary/8 PFNs necessary/

> to describe the subsection's 32K struct pages we are
> onlining.

s/we are onlining/being mapped/

...because ZONE_DEVICE pages are never "onlined".

> On a 1G compound pagemap it let us save 4096 pages.

s/lets us save/saves/

>
> Sections are 128M (or bigger/smaller),

Huh?

> and such when initializing a
> compound memory map where we are initializing compound struct pages, we
> need to preserve the tail page to be reused across the rest of the areas
> for pagesizes which bigger than a section.
>
> Signed-off-by: Joao Martins
> ---
> I wonder, rather than separating vmem_context and mhp_params, that
> one would just pick the latter. Albeit semantically the ctx aren't
> necessarily paramters, context passed from multiple sections onlining
> (i.e. multiple calls to populate_section_memmap). Also provided that
> this is internal state, which isn't passed to external modules, except
> @align and @flags for page size and requesting whether to reuse tail
> page areas.
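As a sanity check on the "6 pages out of the 8" figure above, the 2M
arithmetic works out as follows (illustrative userspace snippet only,
assuming 4K base pages and a 64-byte struct page):

#include <stdio.h>

int main(void)
{
        unsigned long base_page = 4096, struct_page_size = 64;
        unsigned long compound = 2UL << 20;                       /* 2M */
        unsigned long nr_struct_pages = compound / base_page;     /* 512 */
        unsigned long memmap_bytes = nr_struct_pages * struct_page_size; /* 32K */
        unsigned long vmemmap_pages = memmap_bytes / base_page;   /* 8 */

        /* the head page plus one reused tail page stay allocated */
        printf("%lu of %lu vmemmap pages saved per 2M area\n",
               vmemmap_pages - 2, vmemmap_pages);
        return 0;
}

The 1G case scales the same way: 16M of struct pages, i.e. 4096 vmemmap
pages per 1G area.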
> ---
>  include/linux/memory_hotplug.h | 10 ++++
>  include/linux/mm.h             |  2 +-
>  mm/memory_hotplug.c            | 12 ++++-
>  mm/memremap.c                  |  3 ++
>  mm/sparse-vmemmap.c            | 93 ++++++++++++++++++++++++++++------
>  5 files changed, 103 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 73f8bcbb58a4..e15bb82805a3 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -70,6 +70,10 @@ typedef int __bitwise mhp_t;
>   */
>  #define MEMHP_MERGE_RESOURCE ((__force mhp_t)BIT(0))
>
> +/*
> + */
> +#define MEMHP_REUSE_VMEMMAP ((__force mhp_t)BIT(1))
> +
>  /*
>   * Extended parameters for memory hotplug:
>   * altmap: alternative allocator for memmap array (optional)
> @@ -79,10 +83,16 @@ typedef int __bitwise mhp_t;
>  struct mhp_params {
>         struct vmem_altmap *altmap;
>         pgprot_t pgprot;
> +       unsigned int align;
> +       mhp_t flags;
>  };
>
>  struct vmem_context {
>         struct vmem_altmap *altmap;
> +       mhp_t flags;
> +       unsigned int align;
> +       void *block;
> +       unsigned long block_page;
>  };
>
>  /*
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2eb44318bb2d..8b0155441835 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3006,7 +3006,7 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
>  pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
>  pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
>  pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
> -                           struct vmem_altmap *altmap);
> +                           struct vmem_altmap *altmap, void *block);
>  void *vmemmap_alloc_block(unsigned long size, int node);
>  struct vmem_altmap;
>  void *vmemmap_alloc_block_buf(unsigned long size, int node,
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index f8870c53fe5e..56121dfcc44b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -300,6 +300,14 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
>         return 0;
>  }
>
> +static void vmem_context_init(struct vmem_context *ctx, struct mhp_params *params)
> +{
> +       memset(ctx, 0, sizeof(*ctx));
> +       ctx->align = params->align;
> +       ctx->altmap = params->altmap;
> +       ctx->flags = params->flags;
> +}
> +
>  /*
>   * Reasonably generic function for adding memory.  It is
>   * expected that archs that support memory hotplug will
> @@ -313,7 +321,7 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>         unsigned long cur_nr_pages;
>         int err;
>         struct vmem_altmap *altmap = params->altmap;
> -       struct vmem_context ctx = { .altmap = params->altmap };
> +       struct vmem_context ctx;
>
>         if (WARN_ON_ONCE(!params->pgprot.pgprot))
>                 return -EINVAL;
> @@ -338,6 +346,8 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>         if (err)
>                 return err;
>
> +       vmem_context_init(&ctx, params);
> +
>         for (; pfn < end_pfn; pfn += cur_nr_pages) {
>                 /* Select all remaining pages up to the next section boundary */
>                 cur_nr_pages = min(end_pfn - pfn,
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 287a24b7a65a..ecfa74848ac6 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -253,6 +253,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>                 goto err_kasan;
>         }
>
> +       if (pgmap->flags & PGMAP_COMPOUND)
> +               params->align = pgmap->align;
> +
>                 error = arch_add_memory(nid, range->start, range_len(range),
>                                 params);
>         }
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index bcda68ba1381..364c071350e8 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -141,16 +141,20 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
>  }
>
>  pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
> -                                      struct vmem_altmap *altmap)
> +                                      struct vmem_altmap *altmap, void *block)
>  {
>         pte_t *pte = pte_offset_kernel(pmd, addr);
>         if (pte_none(*pte)) {
>                 pte_t entry;
> -               void *p;
> -
> -               p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
> -               if (!p)
> -                       return NULL;
> +               void *p = block;
> +
> +               if (!block) {
> +                       p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
> +                       if (!p)
> +                               return NULL;
> +               } else {
> +                       get_page(virt_to_page(block));
> +               }
>                 entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
>                 set_pte_at(&init_mm, addr, pte, entry);
>         }
> @@ -216,8 +220,10 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
>         return pgd;
>  }
>
> -int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> -                                        int node, struct vmem_altmap *altmap)
> +static void *__meminit __vmemmap_populate_basepages(unsigned long start,
> +                                       unsigned long end, int node,
> +                                       struct vmem_altmap *altmap,
> +                                       void *block)
>  {
>         unsigned long addr = start;
>         pgd_t *pgd;
> @@ -229,38 +235,95 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>         for (; addr < end; addr += PAGE_SIZE) {
>                 pgd = vmemmap_pgd_populate(addr, node);
>                 if (!pgd)
> -                       return -ENOMEM;
> +                       return NULL;
>                 p4d = vmemmap_p4d_populate(pgd, addr, node);
>                 if (!p4d)
> -                       return -ENOMEM;
> +                       return NULL;
>                 pud = vmemmap_pud_populate(p4d, addr, node);
>                 if (!pud)
> -                       return -ENOMEM;
> +                       return NULL;
>                 pmd = vmemmap_pmd_populate(pud, addr, node);
>                 if (!pmd)
> -                       return -ENOMEM;
> -               pte = vmemmap_pte_populate(pmd, addr, node, altmap);
> +                       return NULL;
> +               pte = vmemmap_pte_populate(pmd, addr, node, altmap, block);
>                 if (!pte)
> -                       return -ENOMEM;
> +                       return NULL;
>                 vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
>         }
>
> +       return __va(__pfn_to_phys(pte_pfn(*pte)));
> +}
> +
> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> +                                        int node, struct vmem_altmap *altmap)
> +{
> +       if (!__vmemmap_populate_basepages(start, end, node, altmap, NULL))
> +               return -ENOMEM;
>         return 0;
>  }
>
> +static struct page * __meminit vmemmap_populate_reuse(unsigned long start,
> +                                       unsigned long end, int node,
> +                                       struct vmem_context *ctx)
> +{
> +       unsigned long size, addr = start;
> +       unsigned long psize = PHYS_PFN(ctx->align) * sizeof(struct page);
> +
> +       size = min(psize, end - start);
> +
> +       for (; addr < end; addr += size) {
> +               unsigned long head = addr + PAGE_SIZE;
> +               unsigned long tail = addr;
> +               unsigned long last = addr + size;
> +               void *area;
> +
> +               if (ctx->block_page &&
> +                   IS_ALIGNED((addr - ctx->block_page), psize))
> +                       ctx->block = NULL;
> +
> +               area = ctx->block;
> +               if (!area) {
> +                       if (!__vmemmap_populate_basepages(addr, head, node,
> +                                                         ctx->altmap, NULL))
> +                               return NULL;
> +
> +                       tail = head + PAGE_SIZE;
> +                       area = __vmemmap_populate_basepages(head, tail, node,
> +                                                           ctx->altmap, NULL);
> +                       if (!area)
> +                               return NULL;
> +
> +                       ctx->block = area;
> +                       ctx->block_page = addr;
> +               }
> +
> +               if (!__vmemmap_populate_basepages(tail, last, node,
> +                                                 ctx->altmap, area))
> +                       return NULL;
> +       }

I think the compound page accounting, combined with the altmap
accounting, makes this difficult to read, and the compound page case
deserves its own first-class loop rather than reusing
vmemmap_populate_basepages().

With the suggestion to drop altmap support, I'd expect a
vmemmap_populate_compound() that takes a compound page size and does
the right thing with respect to mapping all the tail pages to the same
pfn.
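Roughly the shape I'm thinking of, as a sketch only: it leans on the
__vmemmap_populate_basepages() helper introduced in this patch (NULL
@block allocates a fresh backing page, non-NULL @block maps that page
again), drops altmap, and leaves out the cross-section state that
vmem_context carries. The name and signature are hypothetical, not the
posted code:

static int __meminit vmemmap_populate_compound(unsigned long start,
                                               unsigned long end, int node,
                                               unsigned long compound_size)
{
        /* bytes of struct page needed per compound page */
        unsigned long memmap_size = PHYS_PFN(compound_size) * sizeof(struct page);
        unsigned long addr;

        for (addr = start; addr < end; addr += memmap_size) {
                unsigned long last = min(addr + memmap_size, end);
                unsigned long tail = addr + PAGE_SIZE;
                void *block;

                /* head vmemmap page gets its own backing page */
                if (!__vmemmap_populate_basepages(addr, tail, node,
                                                  NULL, NULL))
                        return -ENOMEM;

                /* first tail vmemmap page is allocated once... */
                block = __vmemmap_populate_basepages(tail, tail + PAGE_SIZE,
                                                     node, NULL, NULL);
                if (!block)
                        return -ENOMEM;

                /* ...and every other tail vmemmap page maps the same pfn */
                if (!__vmemmap_populate_basepages(tail + PAGE_SIZE, last,
                                                  node, NULL, block))
                        return -ENOMEM;
        }

        return 0;
}

That keeps the head/tail reuse decision local to one loop instead of
threading it through vmem_context and __add_pages().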