* [PATCH 0/9] convert hugetlb_cgroup helper functions to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

This patch series continues the conversion of hugetlb code from being
managed in pages to folios by converting many of the hugetlb_cgroup helper
functions to use folios. This allows the core hugetlb functions to pass in
a folio to these helper functions.
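
The general shape of these conversions, sketched below with a hypothetical
helper for illustration (this is not code from the series itself), is to
make the helper take a struct folio and have any remaining page-based
callers wrap their page with page_folio():

	/* before: the helper takes a page; the head page is implicit */
	void hugetlb_helper(struct page *page);

	/* after: the folio argument makes the head page explicit */
	void hugetlb_helper(struct folio *folio);

	/* a call site that still only has a struct page */
	hugetlb_helper(page_folio(page));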

This series depends on my previous patch series, which begins the hugetlb
folio conversion[1]. Both series apply cleanly on next-20221013 and
pass the LTP hugetlb test cases.


[1] https://lore.kernel.org/lkml/20220922154207.1575343-1-sidhartha.kumar@oracle.com/

Sidhartha Kumar (9):
  mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios
  mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() to folios
  mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() to folios
  mm/hugetlb_cgroup: convert hugetlb_cgroup_migrate to folios
  mm/hugetlb: convert isolate_or_dissolve_huge_page to folios
  mm/hugetlb: convert free_huge_page to folios
  mm/hugetlb_cgroup: convert hugetlb_cgroup_uncharge_page() to folios
  mm/hugetlb_cgroup: convert hugetlb_cgroup_commit_charge*() to folios
  mm/hugetlb: convert move_hugetlb_state() to folios

 include/linux/hugetlb.h        |   6 +-
 include/linux/hugetlb_cgroup.h |  85 ++++++++++++------------
 mm/hugetlb.c                   | 115 ++++++++++++++++++---------------
 mm/hugetlb_cgroup.c            |  63 +++++++++---------
 mm/migrate.c                   |   2 +-
 5 files changed, 144 insertions(+), 127 deletions(-)

-- 
2.31.1



* [PATCH 1/9] mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Change __set_hugetlb_cgroup() to use folios so it is explicit that the
function operates on a head page.
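
As one illustration (this exact call appears in the diff below), a caller
that still has a struct page, which may be a tail page, now names the head
explicitly:

	__set_hugetlb_cgroup(page_folio(page), h_cg, false);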

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h | 14 +++++++-------
 mm/hugetlb_cgroup.c            |  4 ++--
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 630cd255d0cf..7576e9ed8afe 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -90,31 +90,31 @@ hugetlb_cgroup_from_page_rsvd(struct page *page)
 	return __hugetlb_cgroup_from_page(page, true);
 }
 
-static inline void __set_hugetlb_cgroup(struct page *page,
+static inline void __set_hugetlb_cgroup(struct folio *folio,
 				       struct hugetlb_cgroup *h_cg, bool rsvd)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 
-	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
+	if (folio_order(folio) < HUGETLB_CGROUP_MIN_ORDER)
 		return;
 	if (rsvd)
-		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+		set_page_private(folio_page(folio, SUBPAGE_INDEX_CGROUP_RSVD),
 				 (unsigned long)h_cg);
 	else
-		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+		set_page_private(folio_page(folio, SUBPAGE_INDEX_CGROUP),
 				 (unsigned long)h_cg);
 }
 
 static inline void set_hugetlb_cgroup(struct page *page,
 				     struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page, h_cg, false);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, false);
 }
 
 static inline void set_hugetlb_cgroup_rsvd(struct page *page,
 					  struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page, h_cg, true);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, true);
 }
 
 static inline bool hugetlb_cgroup_disabled(void)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index c86691c431fd..81675f8f44e9 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -317,7 +317,7 @@ static void __hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 	if (hugetlb_cgroup_disabled() || !h_cg)
 		return;
 
-	__set_hugetlb_cgroup(page, h_cg, rsvd);
+	__set_hugetlb_cgroup(page_folio(page), h_cg, rsvd);
 	if (!rsvd) {
 		unsigned long usage =
 			h_cg->nodeinfo[page_to_nid(page)]->usage[idx];
@@ -359,7 +359,7 @@ static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
 	h_cg = __hugetlb_cgroup_from_page(page, rsvd);
 	if (unlikely(!h_cg))
 		return;
-	__set_hugetlb_cgroup(page, NULL, rsvd);
+	__set_hugetlb_cgroup(page_folio(page), NULL, rsvd);
 
 	page_counter_uncharge(__hugetlb_cgroup_counter_from_cgroup(h_cg, idx,
 								   rsvd),
-- 
2.31.1



* [PATCH 2/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Introduce folios in __remove_hugetlb_page() by converting
hugetlb_cgroup_from_page() to use folios.

Also gets rid of the unused hugetlb_cgroup_from_page_resv() function.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h | 39 +++++++++++++++++-----------------
 mm/hugetlb.c                   |  5 +++--
 mm/hugetlb_cgroup.c            | 13 +++++++-----
 3 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 7576e9ed8afe..feb2edafc8b6 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -67,27 +67,34 @@ struct hugetlb_cgroup {
 };
 
 static inline struct hugetlb_cgroup *
-__hugetlb_cgroup_from_page(struct page *page, bool rsvd)
+__hugetlb_cgroup_from_folio(struct folio *folio, bool rsvd)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	struct page *tail;
 
-	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
+	VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio);
+	if (folio_order(folio) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
-	if (rsvd)
-		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
-	else
-		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
+
+	if (rsvd) {
+		tail = folio_page(folio, SUBPAGE_INDEX_CGROUP_RSVD);
+		return (void *)page_private(tail);
+	}
+
+	else {
+		tail = folio_page(folio, SUBPAGE_INDEX_CGROUP);
+		return (void *)page_private(tail);
+	}
 }
 
-static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
+static inline struct hugetlb_cgroup *hugetlb_cgroup_from_folio(struct folio *folio)
 {
-	return __hugetlb_cgroup_from_page(page, false);
+	return __hugetlb_cgroup_from_folio(folio, false);
 }
 
 static inline struct hugetlb_cgroup *
-hugetlb_cgroup_from_page_rsvd(struct page *page)
+hugetlb_cgroup_from_folio_rsvd(struct folio *folio)
 {
-	return __hugetlb_cgroup_from_page(page, true);
+	return __hugetlb_cgroup_from_folio(folio, true);
 }
 
 static inline void __set_hugetlb_cgroup(struct folio *folio,
@@ -181,19 +188,13 @@ static inline void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
 {
 }
 
-static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
-{
-	return NULL;
-}
-
-static inline struct hugetlb_cgroup *
-hugetlb_cgroup_from_page_resv(struct page *page)
+static inline struct hugetlb_cgroup *hugetlb_cgroup_from_folio(struct folio *folio)
 {
 	return NULL;
 }
 
 static inline struct hugetlb_cgroup *
-hugetlb_cgroup_from_page_rsvd(struct page *page)
+hugetlb_cgroup_from_folio_rsvd(struct folio *folio)
 {
 	return NULL;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4133ffbbeb50..bcb9bfce32ee 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1434,9 +1434,10 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 							bool demote)
 {
 	int nid = page_to_nid(page);
+	struct folio *folio = page_folio(page);
 
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
+	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
+	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
 
 	lockdep_assert_held(&hugetlb_lock);
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 81675f8f44e9..600c98560a0f 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -191,8 +191,9 @@ static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
 	struct page_counter *counter;
 	struct hugetlb_cgroup *page_hcg;
 	struct hugetlb_cgroup *parent = parent_hugetlb_cgroup(h_cg);
+	struct folio *folio = page_folio(page);
 
-	page_hcg = hugetlb_cgroup_from_page(page);
+	page_hcg = hugetlb_cgroup_from_folio(folio);
 	/*
 	 * We can have pages in active list without any cgroup
 	 * ie, hugepage with less than 3 pages. We can safely
@@ -352,14 +353,15 @@ static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
 					   struct page *page, bool rsvd)
 {
 	struct hugetlb_cgroup *h_cg;
+	struct folio *folio = page_folio(page);
 
 	if (hugetlb_cgroup_disabled())
 		return;
 	lockdep_assert_held(&hugetlb_lock);
-	h_cg = __hugetlb_cgroup_from_page(page, rsvd);
+	h_cg = __hugetlb_cgroup_from_folio(folio, rsvd);
 	if (unlikely(!h_cg))
 		return;
-	__set_hugetlb_cgroup(page_folio(page), NULL, rsvd);
+	__set_hugetlb_cgroup(folio, NULL, rsvd);
 
 	page_counter_uncharge(__hugetlb_cgroup_counter_from_cgroup(h_cg, idx,
 								   rsvd),
@@ -891,13 +893,14 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	struct hugetlb_cgroup *h_cg;
 	struct hugetlb_cgroup *h_cg_rsvd;
 	struct hstate *h = page_hstate(oldhpage);
+	struct folio *old_folio = page_folio(oldhpage);
 
 	if (hugetlb_cgroup_disabled())
 		return;
 
 	spin_lock_irq(&hugetlb_lock);
-	h_cg = hugetlb_cgroup_from_page(oldhpage);
-	h_cg_rsvd = hugetlb_cgroup_from_page_rsvd(oldhpage);
+	h_cg = hugetlb_cgroup_from_folio(old_folio);
+	h_cg_rsvd = hugetlb_cgroup_from_folio_rsvd(old_folio);
 	set_hugetlb_cgroup(oldhpage, NULL);
 	set_hugetlb_cgroup_rsvd(oldhpage, NULL);
 
-- 
2.31.1



* [PATCH 3/9] mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Allows __prep_new_huge_page() to operate on a folio by converting
set_hugetlb_cgroup*() to take in a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h | 12 ++++++------
 mm/hugetlb.c                   | 33 +++++++++++++++++++--------------
 mm/hugetlb_cgroup.c            | 11 ++++++-----
 3 files changed, 31 insertions(+), 25 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index feb2edafc8b6..a7e3540f7f38 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -112,16 +112,16 @@ static inline void __set_hugetlb_cgroup(struct folio *folio,
 				 (unsigned long)h_cg);
 }
 
-static inline void set_hugetlb_cgroup(struct page *page,
+static inline void set_hugetlb_cgroup(struct folio *folio,
 				     struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page_folio(page), h_cg, false);
+	__set_hugetlb_cgroup(folio, h_cg, false);
 }
 
-static inline void set_hugetlb_cgroup_rsvd(struct page *page,
+static inline void set_hugetlb_cgroup_rsvd(struct folio *folio,
 					  struct hugetlb_cgroup *h_cg)
 {
-	__set_hugetlb_cgroup(page_folio(page), h_cg, true);
+	__set_hugetlb_cgroup(folio, h_cg, true);
 }
 
 static inline bool hugetlb_cgroup_disabled(void)
@@ -199,12 +199,12 @@ hugetlb_cgroup_from_folio_rsvd(struct folio *folio)
 	return NULL;
 }
 
-static inline void set_hugetlb_cgroup(struct page *page,
+static inline void set_hugetlb_cgroup(struct folio *folio,
 				     struct hugetlb_cgroup *h_cg)
 {
 }
 
-static inline void set_hugetlb_cgroup_rsvd(struct page *page,
+static inline void set_hugetlb_cgroup_rsvd(struct folio *folio,
 					  struct hugetlb_cgroup *h_cg)
 {
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bcb9bfce32ee..4d98bf7ba81c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1758,19 +1758,21 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
 	h->nr_huge_pages_node[nid]++;
 }
 
-static void __prep_new_huge_page(struct hstate *h, struct page *page)
+static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
-	hugetlb_vmemmap_optimize(h, page);
-	INIT_LIST_HEAD(&page->lru);
-	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
-	hugetlb_set_page_subpool(page, NULL);
-	set_hugetlb_cgroup(page, NULL);
-	set_hugetlb_cgroup_rsvd(page, NULL);
+	hugetlb_vmemmap_optimize(h, &folio->page);
+	INIT_LIST_HEAD(&folio->lru);
+	folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+	hugetlb_set_folio_subpool(folio, NULL);
+	set_hugetlb_cgroup(folio, NULL);
+	set_hugetlb_cgroup_rsvd(folio, NULL);
 }
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
-	__prep_new_huge_page(h, page);
+	struct folio *folio = page_folio(page);
+
+	__prep_new_hugetlb_folio(h, folio);
 	spin_lock_irq(&hugetlb_lock);
 	__prep_account_new_huge_page(h, nid);
 	spin_unlock_irq(&hugetlb_lock);
@@ -2731,8 +2733,10 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 					struct list_head *list)
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
-	int nid = page_to_nid(old_page);
+	struct folio *old_folio = page_folio(old_page);
+	int nid = folio_nid(old_folio);
 	struct page *new_page;
+	struct folio *new_folio;
 	int ret = 0;
 
 	/*
@@ -2745,16 +2749,17 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
 	if (!new_page)
 		return -ENOMEM;
-	__prep_new_huge_page(h, new_page);
+	new_folio = page_folio(new_page);
+	__prep_new_hugetlb_folio(h, new_folio);
 
 retry:
 	spin_lock_irq(&hugetlb_lock);
-	if (!PageHuge(old_page)) {
+	if (!folio_test_hugetlb(old_folio)) {
 		/*
 		 * Freed from under us. Drop new_page too.
 		 */
 		goto free_new;
-	} else if (page_count(old_page)) {
+	} else if (folio_ref_count(old_folio)) {
 		/*
 		 * Someone has grabbed the page, try to isolate it here.
 		 * Fail with -EBUSY if not possible.
@@ -2763,7 +2768,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 		ret = isolate_hugetlb(old_page, list);
 		spin_lock_irq(&hugetlb_lock);
 		goto free_new;
-	} else if (!HPageFreed(old_page)) {
+	} else if (!folio_test_hugetlb(old_folio)) {
 		/*
 		 * Page's refcount is 0 but it has not been enqueued in the
 		 * freelist yet. Race window is small, so we can succeed here if
@@ -2801,7 +2806,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 free_new:
 	spin_unlock_irq(&hugetlb_lock);
 	/* Page has a zero ref count, but needs a ref to be freed */
-	set_page_refcounted(new_page);
+	folio_ref_unfreeze(new_folio, 1);
 	update_and_free_page(h, new_page, false);
 
 	return ret;
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 600c98560a0f..692b23b5d423 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -212,7 +212,7 @@ static void hugetlb_cgroup_move_parent(int idx, struct hugetlb_cgroup *h_cg,
 	/* Take the pages off the local counter */
 	page_counter_cancel(counter, nr_pages);
 
-	set_hugetlb_cgroup(page, parent);
+	set_hugetlb_cgroup(folio, parent);
 out:
 	return;
 }
@@ -894,6 +894,7 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	struct hugetlb_cgroup *h_cg_rsvd;
 	struct hstate *h = page_hstate(oldhpage);
 	struct folio *old_folio = page_folio(oldhpage);
+	struct folio *new_folio = page_folio(newhpage);
 
 	if (hugetlb_cgroup_disabled())
 		return;
@@ -901,12 +902,12 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	spin_lock_irq(&hugetlb_lock);
 	h_cg = hugetlb_cgroup_from_folio(old_folio);
 	h_cg_rsvd = hugetlb_cgroup_from_folio_rsvd(old_folio);
-	set_hugetlb_cgroup(oldhpage, NULL);
-	set_hugetlb_cgroup_rsvd(oldhpage, NULL);
+	set_hugetlb_cgroup(old_folio, NULL);
+	set_hugetlb_cgroup_rsvd(old_folio, NULL);
 
 	/* move the h_cg details to new cgroup */
-	set_hugetlb_cgroup(newhpage, h_cg);
-	set_hugetlb_cgroup_rsvd(newhpage, h_cg_rsvd);
+	set_hugetlb_cgroup(new_folio, h_cg);
+	set_hugetlb_cgroup_rsvd(new_folio, h_cg_rsvd);
 	list_move(&newhpage->lru, &h->hugepage_activelist);
 	spin_unlock_irq(&hugetlb_lock);
 	return;
-- 
2.31.1



* [PATCH 4/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_migrate to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Cleans up intermediate page to folio conversion code in
hugetlb_cgroup_migrate() by changing its arguments from pages to folios.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h | 8 ++++----
 mm/hugetlb.c                   | 2 +-
 mm/hugetlb_cgroup.c            | 8 +++-----
 3 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index a7e3540f7f38..789b6fef176d 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -177,8 +177,8 @@ extern void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
 						bool region_del);
 
 extern void hugetlb_cgroup_file_init(void) __init;
-extern void hugetlb_cgroup_migrate(struct page *oldhpage,
-				   struct page *newhpage);
+extern void hugetlb_cgroup_migrate(struct folio *old_folio,
+				   struct folio *new_folio);
 
 #else
 static inline void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
@@ -286,8 +286,8 @@ static inline void hugetlb_cgroup_file_init(void)
 {
 }
 
-static inline void hugetlb_cgroup_migrate(struct page *oldhpage,
-					  struct page *newhpage)
+static inline void hugetlb_cgroup_migrate(struct folio *old_folio,
+					  struct folio *new_folio)
 {
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4d98bf7ba81c..e2dcc9cffb2b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7289,7 +7289,7 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 {
 	struct hstate *h = page_hstate(oldpage);
 
-	hugetlb_cgroup_migrate(oldpage, newpage);
+	hugetlb_cgroup_migrate(page_folio(oldpage), page_folio(newpage));
 	set_page_owner_migrate_reason(newpage, reason);
 
 	/*
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 692b23b5d423..351ffb40261c 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -888,13 +888,11 @@ void __init hugetlb_cgroup_file_init(void)
  * hugetlb_lock will make sure a parallel cgroup rmdir won't happen
  * when we migrate hugepages
  */
-void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
+void hugetlb_cgroup_migrate(struct folio *old_folio, struct folio *new_folio)
 {
 	struct hugetlb_cgroup *h_cg;
 	struct hugetlb_cgroup *h_cg_rsvd;
-	struct hstate *h = page_hstate(oldhpage);
-	struct folio *old_folio = page_folio(oldhpage);
-	struct folio *new_folio = page_folio(newhpage);
+	struct hstate *h = folio_hstate(old_folio);
 
 	if (hugetlb_cgroup_disabled())
 		return;
@@ -908,7 +906,7 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	/* move the h_cg details to new cgroup */
 	set_hugetlb_cgroup(new_folio, h_cg);
 	set_hugetlb_cgroup_rsvd(new_folio, h_cg_rsvd);
-	list_move(&newhpage->lru, &h->hugepage_activelist);
+	list_move(&new_folio->lru, &h->hugepage_activelist);
 	spin_unlock_irq(&hugetlb_lock);
 	return;
 }
-- 
2.31.1



* [PATCH 5/9] mm/hugetlb: convert isolate_or_dissolve_huge_page to folios
From: Sidhartha Kumar @ 2022-10-14  3:12 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Removes a call to compound_head() by using a folio when operating on the
head page of a hugetlb compound page.
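
For example (an illustrative sketch, not part of the diff below): with a
struct page that may be a tail page, the hstate lookup needs an explicit
compound_head() call, while a folio already refers to the head page:

	/* before */
	head = compound_head(page);
	h = page_hstate(head);

	/* after: page_folio() resolves the head once, up front */
	struct folio *folio = page_folio(page);
	h = folio_hstate(folio);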

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e2dcc9cffb2b..44a9a6072c58 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 {
 	struct hstate *h;
-	struct page *head;
+	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
 	/*
@@ -2824,9 +2824,8 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	 * Return success when racing as if we dissolved the page ourselves.
 	 */
 	spin_lock_irq(&hugetlb_lock);
-	if (PageHuge(page)) {
-		head = compound_head(page);
-		h = page_hstate(head);
+	if (folio_test_hugetlb(folio)) {
+		h = folio_hstate(folio);
 	} else {
 		spin_unlock_irq(&hugetlb_lock);
 		return 0;
@@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (page_count(head) && !isolate_hugetlb(head, list))
+	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
 		ret = 0;
-	else if (!page_count(head))
-		ret = alloc_and_dissolve_huge_page(h, head, list);
+	else if (!folio_ref_count(folio))
+		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);
 
 	return ret;
 }
-- 
2.31.1



* [PATCH 6/9] mm/hugetlb: convert free_huge_page to folios
From: Sidhartha Kumar @ 2022-10-14  3:13 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Use folios inside free_huge_page(); this is in preparation for converting
hugetlb_cgroup_uncharge_page() to take in a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 44a9a6072c58..5228c2b805d2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1688,21 +1688,22 @@ void free_huge_page(struct page *page)
 	 * Can't pass hstate in here because it is called from the
 	 * compound page destructor.
 	 */
-	struct hstate *h = page_hstate(page);
-	int nid = page_to_nid(page);
-	struct hugepage_subpool *spool = hugetlb_page_subpool(page);
+	struct folio *folio = page_folio(page);
+	struct hstate *h = folio_hstate(folio);
+	int nid = folio_nid(folio);
+	struct hugepage_subpool *spool = hugetlb_folio_subpool(folio);
 	bool restore_reserve;
 	unsigned long flags;
 
-	VM_BUG_ON_PAGE(page_count(page), page);
-	VM_BUG_ON_PAGE(page_mapcount(page), page);
+	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
+	VM_BUG_ON_PAGE(folio_mapcount(folio), folio);
 
-	hugetlb_set_page_subpool(page, NULL);
-	if (PageAnon(page))
-		__ClearPageAnonExclusive(page);
-	page->mapping = NULL;
-	restore_reserve = HPageRestoreReserve(page);
-	ClearHPageRestoreReserve(page);
+	hugetlb_set_folio_subpool(folio, NULL);
+	if (folio_test_anon(folio))
+		__ClearPageAnonExclusive(&folio->page);
+	folio->mapping = NULL;
+	restore_reserve = folio_test_hugetlb_restore_reserve(folio);
+	folio_clear_hugetlb_restore_reserve(folio);
 
 	/*
 	 * If HPageRestoreReserve was set on page, page allocation consumed a
@@ -1724,7 +1725,7 @@ void free_huge_page(struct page *page)
 	}
 
 	spin_lock_irqsave(&hugetlb_lock, flags);
-	ClearHPageMigratable(page);
+	folio_clear_hugetlb_migratable(folio);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
 	hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
@@ -1732,7 +1733,7 @@ void free_huge_page(struct page *page)
 	if (restore_reserve)
 		h->resv_huge_pages++;
 
-	if (HPageTemporary(page)) {
+	if (folio_test_hugetlb_temporary(folio)) {
 		remove_hugetlb_page(h, page, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
-- 
2.31.1



* [PATCH 7/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_uncharge_page() to folios
From: Sidhartha Kumar @ 2022-10-14  3:13 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Continue to use a folio inside free_huge_page() by converting
hugetlb_cgroup_uncharge_page*() to folios.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h | 16 ++++++++--------
 mm/hugetlb.c                   | 15 +++++++++------
 mm/hugetlb_cgroup.c            | 21 ++++++++++-----------
 3 files changed, 27 insertions(+), 25 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 789b6fef176d..c70f92fe493e 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -158,10 +158,10 @@ extern void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 extern void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 					      struct hugetlb_cgroup *h_cg,
 					      struct page *page);
-extern void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
-					 struct page *page);
-extern void hugetlb_cgroup_uncharge_page_rsvd(int idx, unsigned long nr_pages,
-					      struct page *page);
+extern void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
+					 struct folio *folio);
+extern void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
+					      struct folio *folio);
 
 extern void hugetlb_cgroup_uncharge_cgroup(int idx, unsigned long nr_pages,
 					   struct hugetlb_cgroup *h_cg);
@@ -254,14 +254,14 @@ hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 {
 }
 
-static inline void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
-						struct page *page)
+static inline void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
+						struct folio *folio)
 {
 }
 
-static inline void hugetlb_cgroup_uncharge_page_rsvd(int idx,
+static inline void hugetlb_cgroup_uncharge_folio_rsvd(int idx,
 						     unsigned long nr_pages,
-						     struct page *page)
+						     struct folio *folio)
 {
 }
 static inline void hugetlb_cgroup_uncharge_cgroup(int idx,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5228c2b805d2..d44ee677e8ec 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1726,10 +1726,10 @@ void free_huge_page(struct page *page)
 
 	spin_lock_irqsave(&hugetlb_lock, flags);
 	folio_clear_hugetlb_migratable(folio);
-	hugetlb_cgroup_uncharge_page(hstate_index(h),
-				     pages_per_huge_page(h), page);
-	hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
-					  pages_per_huge_page(h), page);
+	hugetlb_cgroup_uncharge_folio(hstate_index(h),
+				     pages_per_huge_page(h), folio);
+	hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
+					  pages_per_huge_page(h), folio);
 	if (restore_reserve)
 		h->resv_huge_pages++;
 
@@ -2855,6 +2855,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	struct hugepage_subpool *spool = subpool_vma(vma);
 	struct hstate *h = hstate_vma(vma);
 	struct page *page;
+	struct folio *folio;
 	long map_chg, map_commit;
 	long gbl_chg;
 	int ret, idx;
@@ -2918,6 +2919,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	 * a reservation exists for the allocation.
 	 */
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
+
 	if (!page) {
 		spin_unlock_irq(&hugetlb_lock);
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -2932,6 +2934,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		set_page_refcounted(page);
 		/* Fall through */
 	}
+	folio = page_folio(page);
 	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
 	/* If allocation is not consuming a reservation, also store the
 	 * hugetlb_cgroup pointer on the page.
@@ -2961,8 +2964,8 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
 		if (deferred_reserve)
-			hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
-					pages_per_huge_page(h), page);
+			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
+					pages_per_huge_page(h), folio);
 	}
 	return page;
 
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 351ffb40261c..7793401acc12 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -349,11 +349,10 @@ void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 /*
  * Should be called with hugetlb_lock held
  */
-static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
-					   struct page *page, bool rsvd)
+static void __hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
+					   struct folio *folio, bool rsvd)
 {
 	struct hugetlb_cgroup *h_cg;
-	struct folio *folio = page_folio(page);
 
 	if (hugetlb_cgroup_disabled())
 		return;
@@ -371,27 +370,27 @@ static void __hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
 		css_put(&h_cg->css);
 	else {
 		unsigned long usage =
-			h_cg->nodeinfo[page_to_nid(page)]->usage[idx];
+			h_cg->nodeinfo[folio_nid(folio)]->usage[idx];
 		/*
 		 * This write is not atomic due to fetching usage and writing
 		 * to it, but that's fine because we call this with
 		 * hugetlb_lock held anyway.
 		 */
-		WRITE_ONCE(h_cg->nodeinfo[page_to_nid(page)]->usage[idx],
+		WRITE_ONCE(h_cg->nodeinfo[folio_nid(folio)]->usage[idx],
 			   usage - nr_pages);
 	}
 }
 
-void hugetlb_cgroup_uncharge_page(int idx, unsigned long nr_pages,
-				  struct page *page)
+void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
+				  struct folio *folio)
 {
-	__hugetlb_cgroup_uncharge_page(idx, nr_pages, page, false);
+	__hugetlb_cgroup_uncharge_folio(idx, nr_pages, folio, false);
 }
 
-void hugetlb_cgroup_uncharge_page_rsvd(int idx, unsigned long nr_pages,
-				       struct page *page)
+void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
+				       struct folio *folio)
 {
-	__hugetlb_cgroup_uncharge_page(idx, nr_pages, page, true);
+	__hugetlb_cgroup_uncharge_folio(idx, nr_pages, folio, true);
 }
 
 static void __hugetlb_cgroup_uncharge_cgroup(int idx, unsigned long nr_pages,
-- 
2.31.1



* [PATCH 8/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_commit_charge*() to folios
From: Sidhartha Kumar @ 2022-10-14  3:13 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Convert hugetlb_cgroup_commit_charge*() to internally use folios to clean
up the code after __set_hugetlb_cgroup() was changed to take a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb_cgroup.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 7793401acc12..69939c233f4f 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -313,21 +313,21 @@ int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
 /* Should be called with hugetlb_lock held */
 static void __hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 					   struct hugetlb_cgroup *h_cg,
-					   struct page *page, bool rsvd)
+					   struct folio *folio, bool rsvd)
 {
 	if (hugetlb_cgroup_disabled() || !h_cg)
 		return;
 
-	__set_hugetlb_cgroup(page_folio(page), h_cg, rsvd);
+	__set_hugetlb_cgroup(folio, h_cg, rsvd);
 	if (!rsvd) {
 		unsigned long usage =
-			h_cg->nodeinfo[page_to_nid(page)]->usage[idx];
+			h_cg->nodeinfo[folio_nid(folio)]->usage[idx];
 		/*
 		 * This write is not atomic due to fetching usage and writing
 		 * to it, but that's fine because we call this with
 		 * hugetlb_lock held anyway.
 		 */
-		WRITE_ONCE(h_cg->nodeinfo[page_to_nid(page)]->usage[idx],
+		WRITE_ONCE(h_cg->nodeinfo[folio_nid(folio)]->usage[idx],
 			   usage + nr_pages);
 	}
 }
@@ -336,14 +336,18 @@ void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 				  struct hugetlb_cgroup *h_cg,
 				  struct page *page)
 {
-	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, page, false);
+	struct folio *folio = page_folio(page);
+
+	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, false);
 }
 
 void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 				       struct hugetlb_cgroup *h_cg,
 				       struct page *page)
 {
-	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, page, true);
+	struct folio *folio = page_folio(page);
+
+	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, true);
 }
 
 /*
-- 
2.31.1



* [PATCH 9/9] mm/hugetlb: convert move_hugetlb_state() to folios
From: Sidhartha Kumar @ 2022-10-14  3:13 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, almasrymina, linmiaohe,
	minhquangbui99, aneesh.kumar, Sidhartha Kumar

Clean up unmap_and_move_huge_page() by converting move_hugetlb_state() to
take in folios.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h |  6 +++---
 mm/hugetlb.c            | 22 ++++++++++++----------
 mm/migrate.c            |  2 +-
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8cd8854f41d4..bfce1b48fbaa 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -183,7 +183,7 @@ int isolate_hugetlb(struct page *page, struct list_head *list);
 int get_hwpoison_huge_page(struct page *page, bool *hugetlb);
 int get_huge_page_for_hwpoison(unsigned long pfn, int flags);
 void putback_active_hugepage(struct page *page);
-void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
+void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason);
 void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
@@ -438,8 +438,8 @@ static inline void putback_active_hugepage(struct page *page)
 {
 }
 
-static inline void move_hugetlb_state(struct page *oldpage,
-					struct page *newpage, int reason)
+static inline void move_hugetlb_state(struct folio *old_folio,
+					struct folio *new_folio, int reason)
 {
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d44ee677e8ec..351e7c8a585d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7288,15 +7288,15 @@ void putback_active_hugepage(struct page *page)
 	put_page(page);
 }
 
-void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
+void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int reason)
 {
-	struct hstate *h = page_hstate(oldpage);
+	struct hstate *h = folio_hstate(old_folio);
 
-	hugetlb_cgroup_migrate(page_folio(oldpage), page_folio(newpage));
-	set_page_owner_migrate_reason(newpage, reason);
+	hugetlb_cgroup_migrate(old_folio, new_folio);
+	set_page_owner_migrate_reason(&new_folio->page, reason);
 
 	/*
-	 * transfer temporary state of the new huge page. This is
+	 * transfer temporary state of the new hugetlb folio. This is
 	 * reverse to other transitions because the newpage is going to
 	 * be final while the old one will be freed so it takes over
 	 * the temporary status.
@@ -7305,12 +7305,14 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 	 * here as well otherwise the global surplus count will not match
 	 * the per-node's.
 	 */
-	if (HPageTemporary(newpage)) {
-		int old_nid = page_to_nid(oldpage);
-		int new_nid = page_to_nid(newpage);
+	if (folio_test_hugetlb_temporary(new_folio)) {
+		int old_nid = folio_nid(old_folio);
+		int new_nid = folio_nid(new_folio);
+
+
+		folio_set_hugetlb_temporary(old_folio);
+		folio_clear_hugetlb_temporary(new_folio);
 
-		SetHPageTemporary(oldpage);
-		ClearHPageTemporary(newpage);
 
 		/*
 		 * There is no need to transfer the per-node surplus state
diff --git a/mm/migrate.c b/mm/migrate.c
index 55392a706493..ff4256758447 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1328,7 +1328,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		put_anon_vma(anon_vma);
 
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		move_hugetlb_state(hpage, new_hpage, reason);
+		move_hugetlb_state(src, dst, reason);
 		put_new_page = NULL;
 	}
 
-- 
2.31.1



* Re: [PATCH 6/9] mm/hugetlb: convert free_huge_page to folios
From: Andrew Morton @ 2022-10-17 20:36 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, songmuchun, mike.kravetz, willy,
	almasrymina, linmiaohe, minhquangbui99, aneesh.kumar

On Thu, 13 Oct 2022 20:13:00 -0700 Sidhartha Kumar <sidhartha.kumar@oracle.com> wrote:

> Use folios inside free_huge_page(); this is in preparation for converting
> hugetlb_cgroup_uncharge_page() to take in a folio.

I added this build fix.

--- a/mm/hugetlb.c~mm-hugetlb-convert-free_huge_page-to-folios-fix
+++ a/mm/hugetlb.c
@@ -1704,7 +1704,7 @@ void free_huge_page(struct page *page)
 	unsigned long flags;
 
 	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
-	VM_BUG_ON_PAGE(folio_mapcount(folio), folio);
+	VM_BUG_ON_FOLIO(folio_mapcount(folio), folio);
 
 	hugetlb_set_folio_subpool(folio, NULL);
 	if (folio_test_anon(folio))
_



* Re: [PATCH 1/9] mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios
From: Mike Kravetz @ 2022-10-31 14:51 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:12, Sidhartha Kumar wrote:
> Change __set_hugetlb_cgroup() to use folios so it is explicit that the
> function operates on a head page.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb_cgroup.h | 14 +++++++-------
>  mm/hugetlb_cgroup.c            |  4 ++--
>  2 files changed, 9 insertions(+), 9 deletions(-)

Sorry for the long delay in responding,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 2/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() to folios
From: Mike Kravetz @ 2022-10-31 16:13 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:12, Sidhartha Kumar wrote:
> Introduce folios in __remove_hugetlb_page() by converting
> hugetlb_cgroup_from_page() to use folios.
> 
> Also gets rid of the unused hugetlb_cgroup_from_page_resv() function.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb_cgroup.h | 39 +++++++++++++++++-----------------
>  mm/hugetlb.c                   |  5 +++--
>  mm/hugetlb_cgroup.c            | 13 +++++++-----
>  3 files changed, 31 insertions(+), 26 deletions(-)

Changes look fine.  However ...

> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4133ffbbeb50..bcb9bfce32ee 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1434,9 +1434,10 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
>  							bool demote)
>  {
>  	int nid = page_to_nid(page);
> +	struct folio *folio = page_folio(page);
>  
> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> +	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
> +	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
>  
>  	lockdep_assert_held(&hugetlb_lock);
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())

... there is also this a little further in the routine.  

	if (HPageFreed(page)) {

Should probably change this to?

	if (folio_test_hugetlb_freed(folio)) {

Or, is that part of a planned subsequent change?
-- 
Mike Kravetz


* Re: [PATCH 3/9] mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() to folios
From: Mike Kravetz @ 2022-10-31 16:38 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:12, Sidhartha Kumar wrote:
> Allows __prep_new_huge_page() to operate on a folio by converting
> set_hugetlb_cgroup*() to take in a folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1758,19 +1758,21 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
>  	h->nr_huge_pages_node[nid]++;
>  }
>  
> -static void __prep_new_huge_page(struct hstate *h, struct page *page)
> +static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>  {
> -	hugetlb_vmemmap_optimize(h, page);
> -	INIT_LIST_HEAD(&page->lru);
> -	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> -	hugetlb_set_page_subpool(page, NULL);
> -	set_hugetlb_cgroup(page, NULL);
> -	set_hugetlb_cgroup_rsvd(page, NULL);
> +	hugetlb_vmemmap_optimize(h, &folio->page);
> +	INIT_LIST_HEAD(&folio->lru);
> +	folio->_folio_dtor = HUGETLB_PAGE_DTOR;

Seems like we should have a routine 'set_folio_dtor' that has the same
functionality as set_compound_page_dtor.  Here, we lose the check for a
valid DTOR value (although not terribly valuable).

Not required for this patch, but something to note.
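
Something along these lines would keep it (a sketch only, using the
hypothetical name suggested above):

	static inline void set_folio_dtor(struct folio *folio,
					  enum compound_dtor_id compound_dtor)
	{
		VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);

		/* same store __prep_new_hugetlb_folio() does open-coded */
		folio->_folio_dtor = compound_dtor;
	}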

> +	hugetlb_set_folio_subpool(folio, NULL);
> +	set_hugetlb_cgroup(folio, NULL);
> +	set_hugetlb_cgroup_rsvd(folio, NULL);
>  }
>  
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>  {
> -	__prep_new_huge_page(h, page);
> +	struct folio *folio = page_folio(page);
> +
> +	__prep_new_hugetlb_folio(h, folio);
>  	spin_lock_irq(&hugetlb_lock);
>  	__prep_account_new_huge_page(h, nid);
>  	spin_unlock_irq(&hugetlb_lock);
> @@ -2731,8 +2733,10 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>  					struct list_head *list)
>  {
>  	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
> -	int nid = page_to_nid(old_page);
> +	struct folio *old_folio = page_folio(old_page);
> +	int nid = folio_nid(old_folio);
>  	struct page *new_page;
> +	struct folio *new_folio;
>  	int ret = 0;
>  
>  	/*
> @@ -2745,16 +2749,17 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>  	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
>  	if (!new_page)
>  		return -ENOMEM;
> -	__prep_new_huge_page(h, new_page);
> +	new_folio = page_folio(new_page);
> +	__prep_new_hugetlb_folio(h, new_folio);
>  
>  retry:
>  	spin_lock_irq(&hugetlb_lock);
> -	if (!PageHuge(old_page)) {
> +	if (!folio_test_hugetlb(old_folio)) {
>  		/*
>  		 * Freed from under us. Drop new_page too.
>  		 */
>  		goto free_new;
> -	} else if (page_count(old_page)) {
> +	} else if (folio_ref_count(old_folio)) {
>  		/*
>  		 * Someone has grabbed the page, try to isolate it here.
>  		 * Fail with -EBUSY if not possible.
> @@ -2763,7 +2768,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>  		ret = isolate_hugetlb(old_page, list);
>  		spin_lock_irq(&hugetlb_lock);
>  		goto free_new;
> -	} else if (!HPageFreed(old_page)) {
> +	} else if (!folio_test_hugetlb(old_folio)) {

Should that be?
	} else if (!folio_test_hugetlb_freed(old_folio)) {

-- 
Mike Kravetz

>  		/*
>  		 * Page's refcount is 0 but it has not been enqueued in the
>  		 * freelist yet. Race window is small, so we can succeed here if
> @@ -2801,7 +2806,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>  free_new:
>  	spin_unlock_irq(&hugetlb_lock);
>  	/* Page has a zero ref count, but needs a ref to be freed */
> -	set_page_refcounted(new_page);
> +	folio_ref_unfreeze(new_folio, 1);
>  	update_and_free_page(h, new_page, false);
>  
>  	return ret;


* Re: [PATCH 4/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_migrate to folios
From: Mike Kravetz @ 2022-10-31 16:50 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:12, Sidhartha Kumar wrote:
> Cleans up intermediate page to folio conversion code in
> hugetlb_cgroup_migrate() by changing its arguments from pages to folios.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb_cgroup.h | 8 ++++----
>  mm/hugetlb.c                   | 2 +-
>  mm/hugetlb_cgroup.c            | 8 +++-----
>  3 files changed, 8 insertions(+), 10 deletions(-)

Thanks,
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 5/9] mm/hugetlb: convert isolate_or_dissolve_huge_page to folios
From: Mike Kravetz @ 2022-10-31 19:37 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:12, Sidhartha Kumar wrote:
> Removes a call to compound_head() by using a folio when operating on the
> head page of a hugetlb compound page.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)

Looks fine,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 6/9] mm/hugetlb: convert free_huge_page to folios
From: Mike Kravetz @ 2022-10-31 19:44 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:13, Sidhartha Kumar wrote:
> Use folios inside free_huge_page(); this is in preparation for converting
> hugetlb_cgroup_uncharge_page() to take in a folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)

With change from Andrew,
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

TBH, I did not notice that when first reviewing.
-- 
Mike Kravetz


* Re: [PATCH 7/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_uncharge_page() to folios
From: Mike Kravetz @ 2022-10-31 20:11 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:13, Sidhartha Kumar wrote:
> Continue to use a folio inside free_huge_page() by converting
> hugetlb_cgroup_uncharge_page*() to folios.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb_cgroup.h | 16 ++++++++--------
>  mm/hugetlb.c                   | 15 +++++++++------
>  mm/hugetlb_cgroup.c            | 21 ++++++++++-----------
>  3 files changed, 27 insertions(+), 25 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 8/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_commit_charge*() to folios
From: Mike Kravetz @ 2022-10-31 20:22 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:13, Sidhartha Kumar wrote:
> Convert hugetlb_cgroup_commit_charge*() to internally use folios to clean
> up the code after __set_hugetlb_cgroup() was changed to take a folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb_cgroup.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz


* Re: [PATCH 9/9] mm/hugetlb: convert move_hugetlb_state() to folios
From: Mike Kravetz @ 2022-10-31 20:56 UTC
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar

On 10/13/22 20:13, Sidhartha Kumar wrote:
> Clean up unmap_and_move_huge_page() by converting move_hugetlb_state() to
> take in folios.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb.h |  6 +++---
>  mm/hugetlb.c            | 22 ++++++++++++----------
>  mm/migrate.c            |  2 +-
>  3 files changed, 16 insertions(+), 14 deletions(-)

Looks fine with one comment,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1328,7 +1328,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>  		put_anon_vma(anon_vma);
>

It looks like there is a hugetlb_page_subpool(hpage) in this routine
before here that could perhaps be changed to?

	hugetlb_folio_subpool(src)

-- 
Mike Kravetz

>  	if (rc == MIGRATEPAGE_SUCCESS) {
> -		move_hugetlb_state(hpage, new_hpage, reason);
> +		move_hugetlb_state(src, dst, reason);
>  		put_new_page = NULL;
>  	}
>  
> -- 
> 2.31.1
> 


* Re: [PATCH 2/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() to folios
From: Sidhartha Kumar @ 2022-11-01 16:40 UTC
  To: Mike Kravetz
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar



On 10/31/22 9:13 AM, Mike Kravetz wrote:
> On 10/13/22 20:12, Sidhartha Kumar wrote:
>> Introduce folios in __remove_hugetlb_page() by converting
>> hugetlb_cgroup_from_page() to use folios.
>>
>> Also gets rid of the unused hugetlb_cgroup_from_page_rsvd() function.
>>
>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> ---
>>   include/linux/hugetlb_cgroup.h | 39 +++++++++++++++++-----------------
>>   mm/hugetlb.c                   |  5 +++--
>>   mm/hugetlb_cgroup.c            | 13 +++++++-----
>>   3 files changed, 31 insertions(+), 26 deletions(-)
> Changes look fine.  However ...
>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 4133ffbbeb50..bcb9bfce32ee 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1434,9 +1434,10 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
>>   							bool demote)
>>   {
>>   	int nid = page_to_nid(page);
>> +	struct folio *folio = page_folio(page);
>>   
>> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
>> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
>> +	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
>> +	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
>>   
>>   	lockdep_assert_held(&hugetlb_lock);
>>   	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> ... there is also this a little further in the routine.
>
> 	if (HPageFreed(page)) {
>
> Should probably change this to?
>
> 	if (folio_test_hugetlb_freed(folio)) {
>
> Or, is that part of a planned subsequent change?
I will be including this change in a subsequent patch series I plan to 
send out this week.
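
Since this patch already computes folio = page_folio(page) at the top of
__remove_hugetlb_page(), that follow-up should amount to the one-liner
you quoted:

	-	if (HPageFreed(page)) {
	+	if (folio_test_hugetlb_freed(folio)) {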

Thanks,
Sidhartha Kumar



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/9] mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() to folios
  2022-10-31 16:38   ` Mike Kravetz
@ 2022-11-01 16:43     ` Sidhartha Kumar
  0 siblings, 0 replies; 22+ messages in thread
From: Sidhartha Kumar @ 2022-11-01 16:43 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, almasrymina,
	linmiaohe, minhquangbui99, aneesh.kumar



On 10/31/22 9:38 AM, Mike Kravetz wrote:
> On 10/13/22 20:12, Sidhartha Kumar wrote:
>> Allows __prep_new_huge_page() to operate on a folio by converting
>> set_hugetlb_cgroup*() to take in a folio.
>>
>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1758,19 +1758,21 @@ static void __prep_account_new_huge_page(struct hstate *h, int nid)
>>   	h->nr_huge_pages_node[nid]++;
>>   }
>>   
>> -static void __prep_new_huge_page(struct hstate *h, struct page *page)
>> +static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>>   {
>> -	hugetlb_vmemmap_optimize(h, page);
>> -	INIT_LIST_HEAD(&page->lru);
>> -	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>> -	hugetlb_set_page_subpool(page, NULL);
>> -	set_hugetlb_cgroup(page, NULL);
>> -	set_hugetlb_cgroup_rsvd(page, NULL);
>> +	hugetlb_vmemmap_optimize(h, &folio->page);
>> +	INIT_LIST_HEAD(&folio->lru);
>> +	folio->_folio_dtor = HUGETLB_PAGE_DTOR;
> Seems like we should have a routine 'set_folio_dtor' that has the same
> functionality as set_compound_page_dtor.  Here, we lose the check for a
> valid DTOR value (although not terribly valuable).

I agree with the need for a 'set_folio_dtor' routine; I'll send out a
patch for that as well.
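
Something along these lines, I think (a sketch only; the final name and
location may differ, but it keeps the valid-dtor check that
set_compound_page_dtor has today):

	static inline void set_folio_dtor(struct folio *folio,
					  enum compound_dtor_id compound_dtor)
	{
		/* Sketch: preserve the sanity check on the dtor id. */
		VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
		folio->_folio_dtor = compound_dtor;
	}

Then __prep_new_hugetlb_folio() could use it instead of poking
_folio_dtor directly.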

> Not required for this patch, but something to note.
>
>> +	hugetlb_set_folio_subpool(folio, NULL);
>> +	set_hugetlb_cgroup(folio, NULL);
>> +	set_hugetlb_cgroup_rsvd(folio, NULL);
>>   }
>>   
>>   static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>>   {
>> -	__prep_new_huge_page(h, page);
>> +	struct folio *folio = page_folio(page);
>> +
>> +	__prep_new_hugetlb_folio(h, folio);
>>   	spin_lock_irq(&hugetlb_lock);
>>   	__prep_account_new_huge_page(h, nid);
>>   	spin_unlock_irq(&hugetlb_lock);
>> @@ -2731,8 +2733,10 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>   					struct list_head *list)
>>   {
>>   	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
>> -	int nid = page_to_nid(old_page);
>> +	struct folio *old_folio = page_folio(old_page);
>> +	int nid = folio_nid(old_folio);
>>   	struct page *new_page;
>> +	struct folio *new_folio;
>>   	int ret = 0;
>>   
>>   	/*
>> @@ -2745,16 +2749,17 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>   	new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL);
>>   	if (!new_page)
>>   		return -ENOMEM;
>> -	__prep_new_huge_page(h, new_page);
>> +	new_folio = page_folio(new_page);
>> +	__prep_new_hugetlb_folio(h, new_folio);
>>   
>>   retry:
>>   	spin_lock_irq(&hugetlb_lock);
>> -	if (!PageHuge(old_page)) {
>> +	if (!folio_test_hugetlb(old_folio)) {
>>   		/*
>>   		 * Freed from under us. Drop new_page too.
>>   		 */
>>   		goto free_new;
>> -	} else if (page_count(old_page)) {
>> +	} else if (folio_ref_count(old_folio)) {
>>   		/*
>>   		 * Someone has grabbed the page, try to isolate it here.
>>   		 * Fail with -EBUSY if not possible.
>> @@ -2763,7 +2768,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>   		ret = isolate_hugetlb(old_page, list);
>>   		spin_lock_irq(&hugetlb_lock);
>>   		goto free_new;
>> -	} else if (!HPageFreed(old_page)) {
>> +	} else if (!folio_test_hugetlb(old_folio)) {
> Should that be?
> 	} else if (!folio_test_hugetlb_freed(old_folio)) {

Yes, good catch. I will fix this in a v2.

Thanks,
Sidhartha Kumar

>


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2022-11-01 16:43 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-14  3:12 [PATCH 0/9] convert hugetlb_cgroup helper functions to folios Sidhartha Kumar
2022-10-14  3:12 ` [PATCH 1/9] mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() " Sidhartha Kumar
2022-10-31 14:51   ` Mike Kravetz
2022-10-14  3:12 ` [PATCH 2/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() " Sidhartha Kumar
2022-10-31 16:13   ` Mike Kravetz
2022-11-01 16:40     ` Sidhartha Kumar
2022-10-14  3:12 ` [PATCH 3/9] mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() " Sidhartha Kumar
2022-10-31 16:38   ` Mike Kravetz
2022-11-01 16:43     ` Sidhartha Kumar
2022-10-14  3:12 ` [PATCH 4/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_migrate " Sidhartha Kumar
2022-10-31 16:50   ` Mike Kravetz
2022-10-14  3:12 ` [PATCH 5/9] mm/hugetlb: convert isolate_or_dissolve_huge_page " Sidhartha Kumar
2022-10-31 19:37   ` Mike Kravetz
2022-10-14  3:13 ` [PATCH 6/9] mm/hugetlb: convert free_huge_page " Sidhartha Kumar
2022-10-17 20:36   ` Andrew Morton
2022-10-31 19:44   ` Mike Kravetz
2022-10-14  3:13 ` [PATCH 7/9] mm/hugetlb_cgroup: convert hugetlb_cgroup_uncharge_page() " Sidhartha Kumar
2022-10-31 20:11   ` Mike Kravetz
2022-10-14  3:13 ` [PATCH 8/9] mm/hugeltb_cgroup: convert hugetlb_cgroup_commit_charge*() " Sidhartha Kumar
2022-10-31 20:22   ` Mike Kravetz
2022-10-14  3:13 ` [PATCH 9/9] mm/hugetlb: convert move_hugetlb_state() " Sidhartha Kumar
2022-10-31 20:56   ` Mike Kravetz

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).