From: Zi Yan
To: linux-mm@kvack.org, Roman Gushchin
Cc: Rik van Riel, "Kirill A. Shutemov", Matthew Wilcox, Shakeel Butt,
 Yang Shi, David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 15/16] hugetlb: cma: move cma reserve function to cma.c.
Date: Wed, 2 Sep 2020 14:06:27 -0400
Message-Id: <20200902180628.4052244-16-zi.yan@sent.com>
In-Reply-To: <20200902180628.4052244-1-zi.yan@sent.com>
References: <20200902180628.4052244-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0
X-Mailer: git-send-email 2.28.0

From: Zi Yan

The cma reserve function will be used by other allocations, like the 1GB
THP allocation in the upcoming commit.
Signed-off-by: Zi Yan
---
 .../admin-guide/kernel-parameters.txt |  2 +-
 arch/arm64/mm/hugetlbpage.c           |  2 +-
 arch/powerpc/mm/hugetlbpage.c         |  2 +-
 arch/x86/kernel/setup.c               |  8 +-
 include/linux/cma.h                   | 15 ++++
 include/linux/hugetlb.h               | 12 ---
 mm/cma.c                              | 88 +++++++++++++++++++
 mm/hugetlb.c                          | 88 ++----------------
 8 files changed, 118 insertions(+), 99 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 68fee5e034ca..600668ee0ac7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1507,7 +1507,7 @@
 	hpet_mmap=	[X86, HPET_MMAP] Allow userspace to mmap HPET
 			registers.  Default set by CONFIG_HPET_MMAP_DEFAULT.
 
-	hugetlb_cma=	[HW] The size of a cma area used for allocation
+	hugepage_cma=	[HW] The size of a cma area used for allocation
 			of gigantic hugepages.
 			Format: nn[KMGTPE]
 
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 55ecf6de9ff7..8a3ad7eaae49 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -52,7 +52,7 @@ void __init arm64_hugetlb_cma_reserve(void)
	 * breaking this assumption.
	 */
	WARN_ON(order <= MAX_ORDER);
-	hugetlb_cma_reserve(order);
+	hugepage_cma_reserve(order);
 }
 #endif /* CONFIG_CMA */
 
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 26292544630f..d608e58cb69b 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -699,6 +699,6 @@ void __init gigantic_hugetlb_cma_reserve(void)
 
 	if (order) {
 		VM_WARN_ON(order < MAX_ORDER);
-		hugetlb_cma_reserve(order);
+		hugepage_cma_reserve(order);
 	}
 }
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 52e83ba607b3..93c8fbdff972 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -16,7 +16,7 @@
 #include
 #include
 #include
-#include <linux/hugetlb.h>
+#include <linux/cma.h>
 #include
 #include
 
@@ -640,7 +640,7 @@ static void __init trim_snb_memory(void)
	 * already been reserved.
	 */
	memblock_reserve(0, 1<<20);
-	
+
	for (i = 0; i < ARRAY_SIZE(bad_pages); i++) {
		if (memblock_reserve(bad_pages[i], PAGE_SIZE))
			printk(KERN_WARNING "failed to reserve 0x%08lx\n",
@@ -732,7 +732,7 @@ static void __init trim_low_memory_range(void)
 {
	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
 }
-	
+
 /*
  * Dump out kernel offset information on panic.
  */
@@ -1142,7 +1142,7 @@ void __init setup_arch(char **cmdline_p)
	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
 
	if (boot_cpu_has(X86_FEATURE_GBPAGES))
-		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+		hugepage_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
 
	/*
	 * Reserve memory for crash kernel after SRAT is parsed so that it
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 6ff79fefd01f..abcf7ab712f9 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -47,4 +47,19 @@ extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+
+extern void cma_reserve(int min_order, unsigned long requested_size,
+			const char *name, struct cma *cma_struct[N_MEMORY]);
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+extern void __init hugepage_cma_reserve(int order);
+extern void __init hugepage_cma_check(void);
+#else
+static inline void __init hugepage_cma_check(void)
+{
+}
+static inline void __init hugepage_cma_reserve(int order)
+{
+}
+#endif
+
 #endif
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..087d13a1dc24 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -935,16 +935,4 @@ static inline spinlock_t *huge_pte_lock(struct hstate *h,
	return ptl;
 }
 
-#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
-extern void __init hugetlb_cma_reserve(int order);
-extern void __init hugetlb_cma_check(void);
-#else
-static inline __init void hugetlb_cma_reserve(int order)
-{
-}
-static inline __init void hugetlb_cma_check(void)
-{
-}
-#endif
-
 #endif /* _LINUX_HUGETLB_H */
diff --git a/mm/cma.c b/mm/cma.c
index 7f415d7cda9f..aa3a17d8a191 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -37,6 +37,10 @@
 #include "cma.h"
 
 struct cma cma_areas[MAX_CMA_AREAS];
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+struct cma *hugepage_cma[MAX_NUMNODES];
+#endif
+unsigned long hugepage_cma_size __initdata;
 unsigned cma_area_count;
 static DEFINE_MUTEX(cma_mutex);
 
@@ -541,3 +545,87 @@ int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 
	return 0;
 }
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+/*
+ * cma_reserve() - reserve CMA for gigantic pages on nodes with memory
+ *
+ * Must be called after free_area_init() that updates N_MEMORY via node_set_state().
+ * cma_reserve() scans over N_MEMORY nodemask and hence expects the platforms
+ * to have initialized N_MEMORY state.
+ */
+void __init cma_reserve(int min_order, unsigned long requested_size, const char *name,
+			struct cma *cma_struct[MAX_NUMNODES])
+{
+	unsigned long size, reserved, per_node;
+	int nid;
+
+	if (!requested_size)
+		return;
+
+	if (requested_size < (PAGE_SIZE << min_order)) {
+		pr_warn("%s_cma: cma area should be at least %lu MiB\n",
+			name, (PAGE_SIZE << min_order) / SZ_1M);
+		return;
+	}
+
+	/*
+	 * If 3 GB area is requested on a machine with 4 numa nodes,
+	 * let's allocate 1 GB on first three nodes and ignore the last one.
+	 */
+	per_node = DIV_ROUND_UP(requested_size, nr_online_nodes);
+	pr_info("%s_cma: reserve %lu MiB, up to %lu MiB per node\n",
+		name, requested_size / SZ_1M, per_node / SZ_1M);
+
+	reserved = 0;
+	for_each_node_state(nid, N_ONLINE) {
+		int res;
+		char node_name[20];
+
+		size = min(per_node, requested_size - reserved);
+		size = round_up(size, PAGE_SIZE << min_order);
+
+		snprintf(node_name, 20, "%s%d", name, nid);
+		res = cma_declare_contiguous_nid(0, size, 0,
+						 PAGE_SIZE << min_order,
+						 0, false, node_name,
+						 &cma_struct[nid], nid);
+		if (res) {
+			pr_warn("%s_cma: reservation failed: err %d, node %d",
+				name, res, nid);
+			continue;
+		}
+
+		reserved += size;
+		pr_info("%s_cma: reserved %lu MiB on node %d\n",
+			name, size / SZ_1M, nid);
+
+		if (reserved >= requested_size)
+			break;
+	}
+}
+
+static bool hugepage_cma_reserve_called __initdata;
+
+static int __init cmdline_parse_hugepage_cma(char *p)
+{
+	hugepage_cma_size = memparse(p, &p);
+	return 0;
+}
+
+early_param("hugepage_cma", cmdline_parse_hugepage_cma);
+
+void __init hugepage_cma_reserve(int order)
+{
+	hugepage_cma_reserve_called = true;
+	cma_reserve(order, hugepage_cma_size, "hugepage", hugepage_cma);
+}
+
+void __init hugepage_cma_check(void)
+{
+	if (!hugepage_cma_size || hugepage_cma_reserve_called)
+		return;
+
+	pr_warn("hugepage_cma: the option isn't supported by current arch\n");
+}
+#endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d5357778b026..6685cad879d0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -48,9 +48,9 @@ unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
 
 #ifdef CONFIG_CMA
-static struct cma *hugetlb_cma[MAX_NUMNODES];
+extern struct cma *hugepage_cma[MAX_NUMNODES];
 #endif
-static unsigned long hugetlb_cma_size __initdata;
+extern unsigned long hugepage_cma_size __initdata;
 
 /*
  * Minimum page order among possible hugepage sizes, set to a proper value
@@ -1218,7 +1218,7 @@ static void free_gigantic_page(struct page *page, unsigned int order)
	 * cma_release() returns false.
	 */
 #ifdef CONFIG_CMA
-	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+	if (cma_release(hugepage_cma[page_to_nid(page)], page, 1 << order))
		return;
 #endif
 
@@ -1237,10 +1237,10 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
	int node;
 
	for_each_node_mask(node, *nodemask) {
-		if (!hugetlb_cma[node])
+		if (!hugepage_cma[node])
			continue;
 
-		page = cma_alloc(hugetlb_cma[node], nr_pages,
+		page = cma_alloc(hugepage_cma[node], nr_pages,
				 huge_page_order(h), true);
		if (page)
			return page;
@@ -2532,8 +2532,8 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 
	for (i = 0; i < h->max_huge_pages; ++i) {
		if (hstate_is_gigantic(h)) {
-			if (hugetlb_cma_size) {
-				pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
+			if (hugepage_cma_size) {
+				pr_warn_once("HugeTLB: hugepage_cma is enabled, skip boot time allocation\n");
				break;
			}
			if (!alloc_bootmem_huge_page(h))
@@ -3209,7 +3209,7 @@ static int __init hugetlb_init(void)
		}
	}
 
-	hugetlb_cma_check();
+	hugepage_cma_check();
	hugetlb_init_hstates();
	gather_bootmem_prealloc();
	report_hugepages();
@@ -5622,75 +5622,3 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
		spin_unlock(&hugetlb_lock);
	}
 }
-
-#ifdef CONFIG_CMA
-static bool cma_reserve_called __initdata;
-
-static int __init cmdline_parse_hugetlb_cma(char *p)
-{
-	hugetlb_cma_size = memparse(p, &p);
-	return 0;
-}
-
-early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
-
-void __init hugetlb_cma_reserve(int order)
-{
-	unsigned long size, reserved, per_node;
-	int nid;
-
-	cma_reserve_called = true;
-
-	if (!hugetlb_cma_size)
-		return;
-
-	if (hugetlb_cma_size < (PAGE_SIZE << order)) {
-		pr_warn("hugetlb_cma: cma area should be at least %lu MiB\n",
-			(PAGE_SIZE << order) / SZ_1M);
-		return;
-	}
-
-	/*
-	 * If 3 GB area is requested on a machine with 4 numa nodes,
-	 * let's allocate 1 GB on first three nodes and ignore the last one.
-	 */
-	per_node = DIV_ROUND_UP(hugetlb_cma_size, nr_online_nodes);
-	pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",
-		hugetlb_cma_size / SZ_1M, per_node / SZ_1M);
-
-	reserved = 0;
-	for_each_node_state(nid, N_ONLINE) {
-		int res;
-		char name[20];
-
-		size = min(per_node, hugetlb_cma_size - reserved);
-		size = round_up(size, PAGE_SIZE << order);
-
-		snprintf(name, 20, "hugetlb%d", nid);
-		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
-						 0, false, name,
-						 &hugetlb_cma[nid], nid);
-		if (res) {
-			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
-				res, nid);
-			continue;
-		}
-
-		reserved += size;
-		pr_info("hugetlb_cma: reserved %lu MiB on node %d\n",
-			size / SZ_1M, nid);
-
-		if (reserved >= hugetlb_cma_size)
-			break;
-	}
-}
-
-void __init hugetlb_cma_check(void)
-{
-	if (!hugetlb_cma_size || cma_reserve_called)
-		return;
-
-	pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
-}
-
-#endif /* CONFIG_CMA */
-- 
2.28.0