From: Dan Williams <dan.j.williams@intel.com>
Date: Wed, 28 Jul 2021 00:28:25 -0700
Subject: Re: [PATCH v3 09/14] mm/page_alloc: reuse tail struct pages for compound pagemaps
To: Joao Martins <joao.m.martins@oracle.com>
Cc: Linux MM, Vishal Verma, Dave Jiang, Naoya Horiguchi, Matthew Wilcox,
 Jason Gunthorpe, John Hubbard, Jane Chu, Muchun Song, Mike Kravetz,
 Andrew Morton, Jonathan Corbet, Linux NVDIMM, Linux Doc Mailing List
In-Reply-To: <20210714193542.21857-10-joao.m.martins@oracle.com>

On Wed, Jul 14, 2021 at 12:36 PM Joao Martins wrote:
>
> Currently memmap_init_zone_device() ends up initializing 32768 pages
> when it only needs to initialize 128 given tail page reuse. That
> number is worse with 1GB compound page geometries: 262144 instead of
> 128. Update memmap_init_zone_device() to skip the redundant
> initialization, detailed below.
>
> When a pgmap @geometry is set, all pages are mapped at a given huge
> page alignment and compound pages are used to describe them, as
> opposed to one struct page per 4K page.
>
> With @geometry > PAGE_SIZE and when struct pages are stored in RAM
> (!altmap), most tail pages are reused. Consequently, the number of
> unique struct pages is a lot smaller than the total number of struct
> pages being mapped.
>
> The altmap path is left alone since it does not support memory savings
> based on compound pagemap geometries.
>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  mm/page_alloc.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 188cb5f8c308..96975edac0a8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6600,11 +6600,23 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>  static void __ref memmap_init_compound(struct page *page, unsigned long pfn,
>                                         unsigned long zone_idx, int nid,
>                                         struct dev_pagemap *pgmap,
> +                                       struct vmem_altmap *altmap,
>                                         unsigned long nr_pages)
>  {
>         unsigned int order_align = order_base_2(nr_pages);
>         unsigned long i;
>
> +       /*
> +        * With compound page geometry and when struct pages are stored
> +        * in RAM (!altmap), most tail pages are reused. Consequently,
> +        * the number of unique struct pages to initialize is a lot
> +        * smaller than the total number of struct pages being mapped.
> +        * See vmemmap_populate_compound_pages().
> +        */
> +       if (!altmap)
> +               nr_pages = min_t(unsigned long, nr_pages,
> +                                2 * (PAGE_SIZE/sizeof(struct page)));

What's the scenario where nr_pages is < 128? Shouldn't alignment
already be guaranteed?

> +
>         __SetPageHead(page);
>
>         for (i = 1; i < nr_pages; i++) {
> @@ -6657,7 +6669,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
>                         continue;
>
>                 memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
> -                                    pfns_per_compound);
> +                                    altmap, pfns_per_compound);

This feels odd; memmap_init_compound() doesn't really care about the
altmap. What do you think about explicitly calculating the parameters
that memmap_init_compound() needs and passing them in? Not a strong
requirement to change, but take another look and let me know.

>         }
>
>         pr_info("%s initialised %lu pages in %ums\n", __func__,
> --
> 2.17.1
>
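Something like the below is the shape I have in mind (an untested
sketch; compound_nr_pages() is a strawman name, not a helper that
exists in the tree today):

static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
                                              unsigned long nr_pages)
{
        /*
         * Without an altmap, only the first two vmemmap pages of each
         * compound page hold unique struct pages; the remaining tail
         * pages are reused. With 4K pages and a 64-byte struct page,
         * that is 2 * (4096 / 64) = 128 struct pages to initialize.
         */
        if (altmap)
                return nr_pages;
        return min_t(unsigned long, nr_pages,
                     2 * (PAGE_SIZE / sizeof(struct page)));
}

...so the clamping moves to memmap_init_zone_device() and the call
site becomes:

        memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
                             compound_nr_pages(altmap, pfns_per_compound));

...which keeps memmap_init_compound() oblivious to the altmap
entirely.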