From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Christoph Hellwig <hch@lst.de>
Subject: [PATCH v3 18/18] mm/workingset: Convert workingset_activation to take a folio
Date: Wed, 30 Jun 2021 05:00:34 +0100	[thread overview]
Message-ID: <20210630040034.1155892-19-willy@infradead.org> (raw)
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>

This function already assumed it was being passed a head page.  No real
change here, except that thp_nr_pages() compiles away on kernels with
THP compiled out while folio_nr_pages() is always present.  Also convert
page_memcg_rcu() to folio_memcg_rcu().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/memcontrol.h | 18 +++++++++---------
 include/linux/swap.h       |  2 +-
 mm/swap.c                  |  2 +-
 mm/workingset.c            | 10 +++++-----
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6b5e8fbf770..be131c28b3bc 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -462,19 +462,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 }
 
 /*
- * page_memcg_rcu - locklessly get the memory cgroup associated with a page
- * @page: a pointer to the page struct
+ * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
+ * @folio: Pointer to the folio.
  *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
+ * Returns a pointer to the memory cgroup associated with the folio,
+ * or NULL. This function assumes that the folio is known to have a
  * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages.
+ * against some type of folios, e.g. slab folios or ex-slab folios.
  */
-static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
-	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
 
-	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
 	if (memcg_data & MEMCG_DATA_KMEM) {
@@ -1125,7 +1125,7 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }
 
-static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
 	return NULL;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 950dd96007ad..614bbef65777 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -325,7 +325,7 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg);
 void workingset_refault(struct page *page, void *shadow);
-void workingset_activation(struct page *page);
+void workingset_activation(struct folio *folio);
 
 /* Only track the nodes of mappings with shadow entries */
 void workingset_update_node(struct xa_node *node);
diff --git a/mm/swap.c b/mm/swap.c
index 8ba62a930370..3c817717af0c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -447,7 +447,7 @@ void mark_page_accessed(struct page *page)
 		else
 			__lru_cache_activate_page(page);
 		ClearPageReferenced(page);
-		workingset_activation(page);
+		workingset_activation(page_folio(page));
 	}
 	if (page_is_idle(page))
 		clear_page_idle(page);
diff --git a/mm/workingset.c b/mm/workingset.c
index 4f7a306ce75a..86e239ec0306 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -390,9 +390,9 @@ void workingset_refault(struct page *page, void *shadow)
 
 /**
  * workingset_activation - note a page activation
- * @page: page that is being activated
+ * @folio: Folio that is being activated.
  */
-void workingset_activation(struct page *page)
+void workingset_activation(struct folio *folio)
 {
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
@@ -405,11 +405,11 @@ void workingset_activation(struct page *page)
 	 * XXX: See workingset_refault() - this should return
 	 * root_mem_cgroup even for !CONFIG_MEMCG.
 	 */
-	memcg = page_memcg_rcu(page);
+	memcg = folio_memcg_rcu(folio);
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	lruvec = mem_cgroup_page_lruvec(page);
-	workingset_age_nonresident(lruvec, thp_nr_pages(page));
+	lruvec = mem_cgroup_folio_lruvec(folio);
+	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 out:
 	rcu_read_unlock();
 }
-- 
2.30.2