From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig <hch@lst.de>
Subject: [PATCH v14 084/138] mm/page_alloc: Add folio allocation functions
Date: Thu, 15 Jul 2021 04:36:10 +0100
Message-Id: <20210715033704.692967-85-willy@infradead.org>
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>
MIME-Version: 1.0

The __folio_alloc(), __folio_alloc_node() and folio_alloc() functions
are mostly for type safety, but they also ensure that the page
allocator allocates a compound page and initialises the deferred list
if the page is large enough to have one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/gfp.h | 16 ++++++++++++++++
 mm/mempolicy.c      | 10 ++++++++++
 mm/page_alloc.c     | 12 ++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index dc5ff40608ce..3745efd21cf6 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -523,6 +523,8 @@ static inline void arch_alloc_page(struct page *page, int order) { }
 
 struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask);
+struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+		nodemask_t *nodemask);
 
 unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
@@ -564,6 +566,15 @@ __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 	return __alloc_pages(gfp_mask, order, nid, NULL);
 }
 
+static inline
+struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid)
+{
+	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+	VM_WARN_ON((gfp & __GFP_THISNODE) && !node_online(nid));
+
+	return __folio_alloc(gfp, order, nid, NULL);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
  * prefer the current CPU's closest node. Otherwise node must be valid and
@@ -580,6 +591,7 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 
 #ifdef CONFIG_NUMA
 struct page *alloc_pages(gfp_t gfp, unsigned int order);
+struct folio *folio_alloc(gfp_t gfp, unsigned order);
 extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
 			struct vm_area_struct *vma, unsigned long addr,
 			int node, bool hugepage);
@@ -590,6 +602,10 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 {
 	return alloc_pages_node(numa_node_id(), gfp_mask, order);
 }
+static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
+{
+	return __folio_alloc_node(gfp, order, numa_node_id());
+}
 #define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
 	alloc_pages(gfp_mask, order)
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e32360e90274..95d0cf05f7ca 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2249,6 +2249,16 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
 }
 EXPORT_SYMBOL(alloc_pages);
 
+struct folio *folio_alloc(gfp_t gfp, unsigned order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	if (page && order > 1)
+		prep_transhuge_page(page);
+	return (struct folio *)page;
+}
+EXPORT_SYMBOL(folio_alloc);
+
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
 {
 	struct mempolicy *pol = mpol_dup(vma_policy(src));
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d72a0d9d4184..d03145671934 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5399,6 +5399,18 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 }
 EXPORT_SYMBOL(__alloc_pages);
 
+struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
+		nodemask_t *nodemask)
+{
+	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
+			preferred_nid, nodemask);
+
+	if (page && order > 1)
+		prep_transhuge_page(page);
+	return (struct folio *)page;
+}
+EXPORT_SYMBOL(__folio_alloc);
+
 /*
  * Common helper functions. Never use with __GFP_HIGHMEM because the returned
  * address cannot represent highmem pages. Use alloc_pages and then kmap if
-- 
2.30.2
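
A minimal usage sketch of the new API, for readers following the
series. This is illustrative only and not part of the patch: the
function name folio_alloc_demo() is invented, GFP_KERNEL and order 2
are arbitrary choices, and folio_put() is assumed from earlier patches
in this series. It shows what the helpers buy a caller: the folio
variants OR in __GFP_COMP and prepare the deferred list, so the caller
gets a correctly initialised compound page without doing either by
hand.

/*
 * Illustrative sketch, not part of this patch: allocate an order-2
 * folio (four pages) and release it.  folio_alloc() adds __GFP_COMP
 * and calls prep_transhuge_page() for order > 1, so the result is a
 * compound page with its deferred split list initialised.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int folio_alloc_demo(int nid)
{
	/* Prefer the current CPU's closest node, like alloc_pages(). */
	struct folio *folio = folio_alloc(GFP_KERNEL, 2);

	if (!folio)
		return -ENOMEM;
	folio_put(folio);

	/* Or target a specific (valid, online) node directly. */
	folio = __folio_alloc_node(GFP_KERNEL, 2, nid);
	if (!folio)
		return -ENOMEM;
	folio_put(folio);
	return 0;
}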