From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3829EC3DA6B for ; Wed, 31 Aug 2022 15:52:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231917AbiHaPwf (ORCPT ); Wed, 31 Aug 2022 11:52:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231226AbiHaPwd (ORCPT ); Wed, 31 Aug 2022 11:52:33 -0400 Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com [IPv6:2607:f8b0:4864:20::1136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AFF2FAA4D6 for ; Wed, 31 Aug 2022 08:52:31 -0700 (PDT) Received: by mail-yw1-x1136.google.com with SMTP id 00721157ae682-33dc345ad78so311444367b3.3 for ; Wed, 31 Aug 2022 08:52:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:from:to:cc; bh=DPh7ToX/xZeKd5sie6f3GwYv1dgrGEWOsdwYUdRqOV8=; b=Ib8pAgAXpi66eUIrDx9HR42EQ83DMfbld1dlC8/B6EAvidr1kvmEAZeUXRt94L1+CY fOk9VsOVmUuFoA3ousl8Ls9nH8k8tfM+rNhI6xbMFrdhErQiM5hQSsqd2rBTC4+hb+jz x5t/VHEd7zMohGNGfLtkKMabhYYj05y3VRMNFTksjD0TxoUQD1VZClnayPhKj0nf7p9J AKzTfeLiXAmgNrt2WdWSuelER8LXAcfC5HgCInP5/nDUlNIINl8G7Ik3e4I5/pB0YrkX Qcp8GP0MNCkY9+ymMPvS4UyuNVxGZAw/VRq+/lxc53pSVnVgD3IP07MsNXunkZUp3HjH knuA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:subject:message-id:date:from:in-reply-to:references :mime-version:x-gm-message-state:from:to:cc; bh=DPh7ToX/xZeKd5sie6f3GwYv1dgrGEWOsdwYUdRqOV8=; b=em8b5SNFo+Ko1SgRiIzpYjoIVtwVTexQGqydZZq20xpsprj16/o1Y2hFE5/Y3Fe7Cj 7eYSw+C8WlkmEQLLWPb2zt0PWn+3lJqDjKKwFAb136Gt1FJr8oHQKvlViz8+2la74Fvl bO+gPBEjp2Xsn1gba4pvWJOf3uxzjnDBc2fZ2TqJ14BXHGV8MZfNPycqMhB8nGoh5nKL 1GG4PUWv/qDHYvbXTUaRLzc8+YReoNlxKt4IZ4a67SGtt+UF+109RRpGaPN4n4vArJDS /Fgsh/ENYR0CbdScTTTqty31N+wH+udWDEwr22mYSeEhsIgG5GUzf7TGU12Q+Dx/LpIO lugw== X-Gm-Message-State: ACgBeo11J00ybWeubOQEuJsylZ4OmfOuN3Jq5x3iB8NaFgP2EPVLfW1R BdnSKL6gHRljxYB8kk8RxgtXCBq3BJmjO0QJK9uoaw== X-Google-Smtp-Source: AA6agR77ayVKimqTSvnLuFWBEKBbMWCXeZt/MyEcvQRogVe/KquV1BGkRaxT2hK9yX4AVYf+RasGcwvoXy2efvclpMo= X-Received: by 2002:a0d:d850:0:b0:340:d2c0:b022 with SMTP id a77-20020a0dd850000000b00340d2c0b022mr16260868ywe.469.1661961150716; Wed, 31 Aug 2022 08:52:30 -0700 (PDT) MIME-Version: 1.0 References: <20220830214919.53220-1-surenb@google.com> <20220830214919.53220-11-surenb@google.com> <20220831101103.fj5hjgy3dbb44fit@suse.de> In-Reply-To: From: Suren Baghdasaryan Date: Wed, 31 Aug 2022 08:52:19 -0700 Message-ID: Subject: Re: [RFC PATCH 10/30] mm: enable page allocation tagging for __get_free_pages and alloc_pages To: Mel Gorman Cc: Andrew Morton , Kent Overstreet , Michal Hocko , Vlastimil Babka , Johannes Weiner , Roman Gushchin , Davidlohr Bueso , Matthew Wilcox , "Liam R. Howlett" , David Vernet , Peter Zijlstra , Juri Lelli , Laurent Dufour , Peter Xu , David Hildenbrand , Jens Axboe , mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, changbin.du@intel.com, ytcoode@gmail.com, Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Benjamin Segall , Daniel Bristot de Oliveira , Valentin Schneider , Christopher Lameter , Pekka Enberg , Joonsoo Kim , 42.hyeyoo@gmail.com, Alexander Potapenko , Marco Elver , dvyukov@google.com, Shakeel Butt , Muchun Song , arnd@arndb.de, jbaron@akamai.com, David Rientjes , Minchan Kim , Kalesh Singh , kernel-team , linux-mm , iommu@lists.linux.dev, kasan-dev@googlegroups.com, io-uring@vger.kernel.org, linux-arch@vger.kernel.org, xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org, linux-modules@vger.kernel.org, LKML Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-bcache@vger.kernel.org On Wed, Aug 31, 2022 at 8:45 AM Suren Baghdasaryan wrote: > > On Wed, Aug 31, 2022 at 3:11 AM Mel Gorman wrote: > > > > On Tue, Aug 30, 2022 at 02:48:59PM -0700, Suren Baghdasaryan wrote: > > > Redefine alloc_pages, __get_free_pages to record allocations done by > > > these functions. Instrument deallocation hooks to record object freeing. > > > > > > Signed-off-by: Suren Baghdasaryan > > > +#ifdef CONFIG_PAGE_ALLOC_TAGGING > > > + > > > #include > > > #include > > > > > > @@ -25,4 +27,37 @@ static inline void pgalloc_tag_dec(struct page *page, unsigned int order) > > > alloc_tag_sub(get_page_tag_ref(page), PAGE_SIZE << order); > > > } > > > > > > +/* > > > + * Redefinitions of the common page allocators/destructors > > > + */ > > > +#define pgtag_alloc_pages(gfp, order) \ > > > +({ \ > > > + struct page *_page = _alloc_pages((gfp), (order)); \ > > > + \ > > > + if (_page) \ > > > + alloc_tag_add(get_page_tag_ref(_page), PAGE_SIZE << (order));\ > > > + _page; \ > > > +}) > > > + > > > > Instead of renaming alloc_pages, why is the tagging not done in > > __alloc_pages()? At least __alloc_pages_bulk() is also missed. The branch > > can be guarded with IS_ENABLED. > > Hmm. Assuming all the other allocators using __alloc_pages are inlined, that > should work. I'll try that and if that works will incorporate in the > next respin. > Thanks! > > I don't think IS_ENABLED is required because the tagging functions are already > defined as empty if the appropriate configs are not enabled. Unless I > misunderstood > your node. > > > > > > +#define pgtag_get_free_pages(gfp_mask, order) \ > > > +({ \ > > > + struct page *_page; \ > > > + unsigned long _res = _get_free_pages((gfp_mask), (order), &_page);\ > > > + \ > > > + if (_res) \ > > > + alloc_tag_add(get_page_tag_ref(_page), PAGE_SIZE << (order));\ > > > + _res; \ > > > +}) > > > + > > > > Similar, the tagging could happen in a core function instead of a wrapper. Ack. > > > > > +#else /* CONFIG_PAGE_ALLOC_TAGGING */ > > > + > > > +#define pgtag_alloc_pages(gfp, order) _alloc_pages(gfp, order) > > > + > > > +#define pgtag_get_free_pages(gfp_mask, order) \ > > > + _get_free_pages((gfp_mask), (order), NULL) > > > + > > > +#define pgalloc_tag_dec(__page, __size) do {} while (0) > > > + > > > +#endif /* CONFIG_PAGE_ALLOC_TAGGING */ > > > + > > > #endif /* _LINUX_PGALLOC_TAG_H */ > > > diff --git a/mm/mempolicy.c b/mm/mempolicy.c > > > index b73d3248d976..f7e6d9564a49 100644 > > > --- a/mm/mempolicy.c > > > +++ b/mm/mempolicy.c > > > @@ -2249,7 +2249,7 @@ EXPORT_SYMBOL(vma_alloc_folio); > > > * flags are used. > > > * Return: The page on success or NULL if allocation fails. > > > */ > > > -struct page *alloc_pages(gfp_t gfp, unsigned order) > > > +struct page *_alloc_pages(gfp_t gfp, unsigned int order) > > > { > > > struct mempolicy *pol = &default_policy; > > > struct page *page; > > > @@ -2273,7 +2273,7 @@ struct page *alloc_pages(gfp_t gfp, unsigned order) > > > > > > return page; > > > } > > > -EXPORT_SYMBOL(alloc_pages); > > > +EXPORT_SYMBOL(_alloc_pages); > > > > > > struct folio *folio_alloc(gfp_t gfp, unsigned order) > > > { > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > > > index e5486d47406e..165daba19e2a 100644 > > > --- a/mm/page_alloc.c > > > +++ b/mm/page_alloc.c > > > @@ -763,6 +763,7 @@ static inline bool pcp_allowed_order(unsigned int order) > > > > > > static inline void free_the_page(struct page *page, unsigned int order) > > > { > > > + > > > if (pcp_allowed_order(order)) /* Via pcp? */ > > > free_unref_page(page, order); > > > else > > > > Spurious wide-space change. Ack. > > > > -- > > Mel Gorman > > SUSE Labs