From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 66/75] mm: Turn can_split_huge_page() into can_split_folio()
Date: Fri, 4 Feb 2022 19:58:43 +0000
Message-Id: <20220204195852.1751729-67-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>
MIME-Version: 1.0

This function already required a head page to be passed, so this just
adds type-safety and removes a few implicit calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h |  4 ++--
 mm/huge_memory.c        | 15 ++++++++-------
 mm/vmscan.c             |  6 +++---
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4368b314d9c8..e0348bca3d66 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -185,7 +185,7 @@ void prep_transhuge_page(struct page *page);
 void free_transhuge_page(struct page *page);
 bool is_transparent_hugepage(struct page *page);
 
-bool can_split_huge_page(struct page *page, int *pextra_pins);
+bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
 {
@@ -387,7 +387,7 @@ static inline bool is_transparent_hugepage(struct page *page)
 #define thp_get_unmapped_area	NULL
 
 static inline bool
-can_split_huge_page(struct page *page, int *pextra_pins)
+can_split_folio(struct folio *folio, int *pextra_pins)
 {
 	BUILD_BUG();
 	return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f711dabc9c62..a80d0408ebf4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2545,18 +2545,19 @@ int page_trans_huge_mapcount(struct page *page)
 }
 
 /* Racy check whether the huge page can be split */
-bool can_split_huge_page(struct page *page, int *pextra_pins)
+bool can_split_folio(struct folio *folio, int *pextra_pins)
 {
 	int extra_pins;
 
 	/* Additional pins from page cache */
-	if (PageAnon(page))
-		extra_pins = PageSwapCache(page) ? thp_nr_pages(page) : 0;
+	if (folio_test_anon(folio))
+		extra_pins = folio_test_swapcache(folio) ?
+				folio_nr_pages(folio) : 0;
 	else
-		extra_pins = thp_nr_pages(page);
+		extra_pins = folio_nr_pages(folio);
 	if (pextra_pins)
 		*pextra_pins = extra_pins;
-	return total_mapcount(page) == page_count(page) - extra_pins - 1;
+	return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - 1;
 }
 
 /*
@@ -2648,7 +2649,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	 * Racy check if we can split the page, before unmap_page() will
 	 * split PMDs
 	 */
-	if (!can_split_huge_page(head, &extra_pins)) {
+	if (!can_split_folio(folio, &extra_pins)) {
 		ret = -EBUSY;
 		goto out_unlock;
 	}
@@ -2957,7 +2958,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 			goto next;
 
 		total++;
-		if (!can_split_huge_page(compound_head(page), NULL))
+		if (!can_split_folio(page_folio(page), NULL))
 			goto next;
 
 		if (!trylock_page(page))
diff --git a/mm/vmscan.c b/mm/vmscan.c
index efe041c2859d..6d2e4da77392 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1696,18 +1696,18 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			if (!PageSwapCache(page)) {
 				if (!(sc->gfp_mask & __GFP_IO))
 					goto keep_locked;
-				if (page_maybe_dma_pinned(page))
+				if (folio_maybe_dma_pinned(folio))
 					goto keep_locked;
 				if (PageTransHuge(page)) {
 					/* cannot split THP, skip it */
-					if (!can_split_huge_page(page, NULL))
+					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
 					 * Split pages without a PMD map right
 					 * away. Chances are some or all of the
 					 * tail pages can be freed without IO.
 					 */
-					if (!compound_mapcount(page) &&
+					if (!folio_entire_mapcount(folio) &&
 					    split_folio_to_list(folio, page_list))
 						goto activate_locked;
-- 
2.34.1