From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zi Yan
To: linux-mm@kvack.org, Roman Gushchin
Cc: Rik van Riel, "Kirill A. Shutemov", Matthew Wilcox, Shakeel Butt,
    Yang Shi, David Nellans, linux-kernel@vger.kernel.org, Zi Yan
Subject: [RFC PATCH 16/16] mm: thp: use cma reservation for pud thp allocation.
Date: Wed, 2 Sep 2020 14:06:28 -0400
Message-Id: <20200902180628.4052244-17-zi.yan@sent.com>
In-Reply-To: <20200902180628.4052244-1-zi.yan@sent.com>
References: <20200902180628.4052244-1-zi.yan@sent.com>

From: Zi Yan

Share the hugepage_cma reservation with hugetlb for PUD THP allocation. The
reserved CMA regions can still be used for movable page allocations.

During a 1GB page split, all subpages are cleared from the CMA bitmap, since
they are no longer part of a 1GB page and will be freed via the normal path
instead of cma_release().
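Before the diff, a condensed sketch of the flow this patch sets up: PUD THP
pages are carved out of the per-node hugepage_cma areas, returned to them
whole while still intact, and merely have their CMA bitmap bits cleared when
split. This is an illustration, not code from the patch; hugepage_cma[] is
assumed to be the per-node CMA array reserved earlier in the series, and
split_time_cma_bookkeeping() is a hypothetical name for the bookkeeping that
__split_huge_pud_page() performs.

/*
 * Condensed sketch, not part of the patch: how PUD THP pages move in and
 * out of the shared hugepage_cma reservation.
 */
struct page *alloc_thp_pud_page(int nid)
{
	/* Carve one 1GB-aligned run out of the per-node CMA area. */
	return cma_alloc(hugepage_cma[nid], HPAGE_PUD_NR, HPAGE_PUD_ORDER, true);
}

bool free_thp_pud_page(struct page *page, int order)
{
	/* A still-intact PUD THP goes back to the CMA area as one unit. */
	return cma_release(hugepage_cma[page_to_nid(page)], page, 1 << order);
}

/* Hypothetical helper: what __split_huge_pud_page() does for CMA. */
static void split_time_cma_bookkeeping(struct page *head)
{
	/*
	 * After the split, the subpages are freed through the normal page
	 * allocator rather than cma_release(), so only the CMA bitmap is
	 * cleared to keep the reservation consistent.
	 */
	cma_clear_bitmap_if_in_range(hugepage_cma[page_to_nid(head)],
				     head, thp_nr_pages(head));
}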
Signed-off-by: Zi Yan
---
 include/linux/cma.h     |  3 +++
 include/linux/huge_mm.h | 10 ++++++++++
 mm/cma.c                | 31 +++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 30 ++++++++++++++++++++++++++++++
 mm/mempolicy.c          | 12 +++++++++---
 mm/page_alloc.c         |  3 ++-
 6 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index abcf7ab712f9..b765d19e4052 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,9 @@ extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			      bool no_warn);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 
+extern bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *page,
+					 unsigned int count);
+
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
 
 extern void cma_reserve(int min_order, unsigned long requested_size,
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3bf8d8a09f08..5a45877055bb 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -24,6 +24,8 @@ extern struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 					   unsigned long addr,
 					   pud_t *pud,
 					   unsigned int flags);
+extern struct page *alloc_thp_pud_page(int nid);
+extern bool free_thp_pud_page(struct page *page, int order);
 #else
 static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
@@ -43,6 +45,14 @@ struct page *follow_trans_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+static inline struct page *alloc_thp_pud_page(int nid)
+{
+	return NULL;
+}
+static inline bool free_thp_pud_page(struct page *page, int order)
+{
+	return false;
+}
 #endif
 
 extern vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
diff --git a/mm/cma.c b/mm/cma.c
index aa3a17d8a191..3f721b8f7ccd 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -532,6 +532,37 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 	return true;
 }
 
+/**
+ * cma_clear_bitmap_if_in_range() - clear the CMA bitmap for a given page range
+ * @cma:   Contiguous memory region for which the allocation was performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function clears the bitmap of memory allocated by cma_alloc().
+ * It returns false when the provided pages do not belong to the contiguous area and
+ * true otherwise.
+ */
+bool cma_clear_bitmap_if_in_range(struct cma *cma, const struct page *pages,
+				  unsigned int count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	if (pfn + count > cma->base_pfn + cma->count)
+		return false;
+
+	cma_clear_bitmap(cma, pfn, count);
+
+	return true;
+}
+
 int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data)
 {
 	int i;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e1440a13da63..2020b843fd97 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/cma.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -64,6 +65,10 @@ static struct shrinker deferred_split_shrinker;
 static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 
+#ifdef CONFIG_CMA
+extern struct cma *hugepage_cma[MAX_NUMNODES];
+#endif
+
 bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
 	/* The addr is used to check if the vma size fits */
@@ -2526,6 +2531,13 @@ static void __split_huge_pud_page(struct page *page, struct list_head *list,
 	/* no file-back page support yet */
 	VM_BUG_ON(!PageAnon(page));
 
+	/* Clear the CMA bitmap; the subpages go back via the normal free path. */
+	if (IS_ENABLED(CONFIG_CMA)) {
+		struct cma *cma = hugepage_cma[page_to_nid(head)];
+		bool in_cma = cma_clear_bitmap_if_in_range(cma, head,
+							   thp_nr_pages(head));
+		VM_BUG_ON(!in_cma);
+	}
 	for (i = HPAGE_PUD_NR - HPAGE_PMD_NR; i >= 1; i -= HPAGE_PMD_NR) {
 		__split_huge_pud_page_tail(head, i, lruvec, list);
 	}
@@ -3753,3 +3765,21 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 		update_mmu_cache_pmd(vma, address, pvmw->pmd);
 }
 #endif
+
+struct page *alloc_thp_pud_page(int nid)
+{
+	struct page *page = NULL;
+#ifdef CONFIG_CMA
+	page = cma_alloc(hugepage_cma[nid], HPAGE_PUD_NR, HPAGE_PUD_ORDER, true);
+#endif
+	return page;
+}
+
+bool free_thp_pud_page(struct page *page, int order)
+{
+	bool ret = false;
+#ifdef CONFIG_CMA
+	ret = cma_release(hugepage_cma[page_to_nid(page)], page, 1 << order);
+#endif
+	return ret;
+}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
[two hunks truncated in the archived copy; each replaces an
 alloc_contig_pages(1UL << order, ...) call made under an "order > MAX_ORDER"
 check]
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
[hunk header truncated in the archived copy]
 	if (order >= MAX_ORDER) {
 		destroy_compound_gigantic_page(page, order);
-		free_contig_range(page_to_pfn(page), 1 << order);
+		if (!free_thp_pud_page(page, order))
+			free_contig_range(page_to_pfn(page), 1 << order);
 	} else {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 		local_irq_save(flags);
-- 
2.28.0
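The mm/mempolicy.c hunks above are truncated in the archived copy; the diffstat
and the surviving fragments indicate that alloc_contig_pages(1UL << order, ...)
call sites behind "order > MAX_ORDER" checks are switched over to the new
helper. A rough, hypothetical sketch of that allocation-side pattern follows;
the function name and the fallback behaviour are illustrative, not the patch's
actual code:

/* Illustrative only -- not the actual mm/mempolicy.c change. */
static struct page *alloc_anon_pud_thp(gfp_t gfp, int order, int nid)
{
	struct page *page = NULL;

	if (order > MAX_ORDER) {
		/* Try the reserved, CMA-backed path first... */
		page = alloc_thp_pud_page(nid);
		/* ...possibly falling back to a plain contiguous allocation. */
		if (!page)
			page = alloc_contig_pages(1UL << order, gfp, nid, NULL);
	}
	return page;
}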