From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org,
	Svetly Todorov
Subject: [PATCH 10/10] proc: Rewrite stable_page_flags()
Date: Tue, 26 Mar 2024 17:10:32 +0000
Message-ID: <20240326171045.410737-11-willy@infradead.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240326171045.410737-1-willy@infradead.org>
References: <20240326171045.410737-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Reduce the number of PageFlag tests and compound_head() calls.

For multi-page folios, every page now reports the flags that apply to
the folio: for example, if the folio is dirty, the dirty flag is set on
all of its pages, not just the head page.  The mapped flag is still
reported per page, as is the hwpoison flag.
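For context (not part of this patch): these bits are what userspace sees in
/proc/kpageflags, which exports one u64 of KPF_* flags per pfn and is
root-only on current kernels.  A minimal sketch of a reader, using only the
KPF_NOPAGE constant from the uapi header <linux/kernel-page-flags.h>; the
in-tree tools/mm/page-types.c is the complete version of this:

/* Illustrative only: print the raw kpageflags word for one pfn. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/kernel-page-flags.h>

int main(int argc, char **argv)
{
	unsigned long pfn;
	uint64_t flags;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
		return 1;
	}
	pfn = strtoul(argv[1], NULL, 0);

	fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0) {
		perror("open /proc/kpageflags");
		return 1;
	}

	/* One u64 per pfn; seek to this pfn's entry and read it. */
	if (pread(fd, &flags, sizeof(flags), pfn * sizeof(flags)) !=
	    sizeof(flags)) {
		perror("pread");
		close(fd);
		return 1;
	}
	close(fd);

	if (flags & (1ULL << KPF_NOPAGE))
		printf("pfn %lu: memory hole (KPF_NOPAGE)\n", pfn);
	else
		printf("pfn %lu: flags %#" PRIx64 "\n", pfn, flags);
	return 0;
}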
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Svetly Todorov
---
 fs/proc/page.c             | 66 ++++++++++++++++++++------------------
 include/linux/huge_mm.h    |  4 +--
 include/linux/page-flags.h |  2 +-
 3 files changed, 38 insertions(+), 34 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 9223856c934b..d6953f95e3b4 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -107,10 +107,13 @@ static inline u64 kpf_copy_bit(u64 kflags, int ubit, int kbit)
 	return ((kflags >> kbit) & 1) << ubit;
 }
 
-u64 stable_page_flags(struct page *page)
+u64 stable_page_flags(const struct page *page)
 {
-	u64 k;
-	u64 u;
+	const struct folio *folio;
+	unsigned long k;
+	unsigned long mapping;
+	bool is_anon;
+	u64 u = 0;
 
 	/*
 	 * pseudo flag: KPF_NOPAGE
@@ -118,49 +121,47 @@ u64 stable_page_flags(struct page *page)
 	 */
 	if (!page)
 		return 1 << KPF_NOPAGE;
+	folio = page_folio(page);
 
-	k = page->flags;
-	u = 0;
+	k = folio->flags;
+	mapping = (unsigned long)folio->mapping;
+	is_anon = mapping & PAGE_MAPPING_ANON;
 
 	/*
 	 * pseudo flags for the well known (anonymous) memory mapped pages
 	 */
 	if (page_mapped(page))
 		u |= 1 << KPF_MMAP;
-	if (PageAnon(page))
+	if (is_anon) {
 		u |= 1 << KPF_ANON;
-	if (PageKsm(page))
-		u |= 1 << KPF_KSM;
+		if (mapping & PAGE_MAPPING_KSM)
+			u |= 1 << KPF_KSM;
+	}
 
 	/*
 	 * compound pages: export both head/tail info
 	 * they together define a compound page's start/end pos and order
 	 */
-	if (PageHead(page))
-		u |= 1 << KPF_COMPOUND_HEAD;
-	if (PageTail(page))
+	if (page == &folio->page)
+		u |= kpf_copy_bit(k, KPF_COMPOUND_HEAD, PG_head);
+	else
 		u |= 1 << KPF_COMPOUND_TAIL;
-	if (PageHuge(page))
+	if (folio_test_hugetlb(folio))
 		u |= 1 << KPF_HUGE;
 	/*
-	 * PageTransCompound can be true for non-huge compound pages (slab
-	 * pages or pages allocated by drivers with __GFP_COMP) because it
-	 * just checks PG_head/PG_tail, so we need to check PageLRU/PageAnon
+	 * We need to check PageLRU/PageAnon
 	 * to make sure a given page is a thp, not a non-huge compound page.
 	 */
-	else if (PageTransCompound(page)) {
-		struct page *head = compound_head(page);
-
-		if (PageLRU(head) || PageAnon(head))
+	else if (folio_test_large(folio)) {
+		if ((k & (1 << PG_lru)) || is_anon)
 			u |= 1 << KPF_THP;
-		else if (is_huge_zero_page(head)) {
+		else if (is_huge_zero_page(&folio->page)) {
 			u |= 1 << KPF_ZERO_PAGE;
 			u |= 1 << KPF_THP;
 		}
 	} else if (is_zero_pfn(page_to_pfn(page)))
 		u |= 1 << KPF_ZERO_PAGE;
 
-
 	/*
 	 * Caveats on high order pages: PG_buddy and PG_slab will only be set
 	 * on the head page.
@@ -175,15 +176,15 @@ u64 stable_page_flags(struct page *page)
 	if (PageTable(page))
 		u |= 1 << KPF_PGTABLE;
 
-	if (page_is_idle(page))
+#if defined(CONFIG_PAGE_IDLE_FLAG) && defined(CONFIG_64BIT)
+	u |= kpf_copy_bit(k, KPF_IDLE,		PG_idle);
+#else
+	if (folio_test_idle(folio))
 		u |= 1 << KPF_IDLE;
+#endif
 
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
-	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
-	if (PageTail(page) && PageSlab(page))
-		u |= 1 << KPF_SLAB;
-
 	u |= kpf_copy_bit(k, KPF_ERROR,		PG_error);
 	u |= kpf_copy_bit(k, KPF_DIRTY,		PG_dirty);
 	u |= kpf_copy_bit(k, KPF_UPTODATE,	PG_uptodate);
@@ -194,7 +195,8 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_ACTIVE,	PG_active);
 	u |= kpf_copy_bit(k, KPF_RECLAIM,	PG_reclaim);
 
-	if (PageSwapCache(page))
+#define SWAPCACHE ((1 << PG_swapbacked) | (1 << PG_swapcache))
+	if ((k & SWAPCACHE) == SWAPCACHE)
 		u |= 1 << KPF_SWAPCACHE;
 	u |= kpf_copy_bit(k, KPF_SWAPBACKED,	PG_swapbacked);
 
@@ -202,7 +204,10 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_MLOCKED,	PG_mlocked);
 
 #ifdef CONFIG_MEMORY_FAILURE
-	u |= kpf_copy_bit(k, KPF_HWPOISON,	PG_hwpoison);
+	if (u & (1 << KPF_HUGE))
+		u |= kpf_copy_bit(k, KPF_HWPOISON,	PG_hwpoison);
+	else
+		u |= kpf_copy_bit(page->flags, KPF_HWPOISON,	PG_hwpoison);
 #endif
 
 #ifdef CONFIG_ARCH_USES_PG_UNCACHED
@@ -228,7 +233,6 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
 {
 	const unsigned long max_dump_pfn = get_max_dump_pfn();
 	u64 __user *out = (u64 __user *)buf;
-	struct page *ppage;
 	unsigned long src = *ppos;
 	unsigned long pfn;
 	ssize_t ret = 0;
@@ -245,9 +249,9 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
 		 * TODO: ZONE_DEVICE support requires to identify
 		 * memmaps that were actually initialized.
 		 */
-		ppage = pfn_to_online_page(pfn);
+		struct page *page = pfn_to_online_page(pfn);
 
-		if (put_user(stable_page_flags(ppage), out)) {
+		if (put_user(stable_page_flags(page), out)) {
 			ret = -EFAULT;
 			break;
 		}
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7576025db55d..1540a1481daf 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -351,7 +351,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 extern struct page *huge_zero_page;
 extern unsigned long huge_zero_pfn;
 
-static inline bool is_huge_zero_page(struct page *page)
+static inline bool is_huge_zero_page(const struct page *page)
 {
 	return READ_ONCE(huge_zero_page) == page;
 }
@@ -480,7 +480,7 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	return 0;
 }
 
-static inline bool is_huge_zero_page(struct page *page)
+static inline bool is_huge_zero_page(const struct page *page)
 {
 	return false;
 }
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index eaecf544039f..888353c209c0 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -734,7 +734,7 @@ static __always_inline bool PageKsm(const struct page *page)
 TESTPAGEFLAG_FALSE(Ksm, ksm)
 #endif
 
-u64 stable_page_flags(struct page *page);
+u64 stable_page_flags(const struct page *page);
 
 /**
  * folio_xor_flags_has_waiters - Change some folio flags.
-- 
2.43.0