linux-kernel.vger.kernel.org archive mirror
* [PATCH mm-unstable 0/8] continue hugetlb folio conversions
@ 2023-01-03 19:13 Sidhartha Kumar
  2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
                   ` (7 more replies)
  0 siblings, 8 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

============== OVERVIEW ===========================
This series continues the conversion of core hugetlb functions to use
folios. It converts many helper functions in the hugetlb fault path, in
preparation for a follow-up series that converts the hugetlb fault code
paths themselves to operate on folios.

============== TESTING ===========================
LTP:
	Ran 10 back-to-back rounds of the LTP hugetlb test suite.
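	A single round, assuming the stock LTP runtest file name, is roughly:
		./runltp -f hugetlb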

Gigantic Huge Pages:
	Test allocation and freeing via hugeadm commands:
		hugeadm --pool-pages-min 1GB:10
		hugeadm --pool-pages-min 1GB:0

Demote:
	Demote one 1GB hugepage to 512 2MB hugepages:
		echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
		echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/demote
		cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
			# 512
		cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
			# 0

Rebased on 1/3/2023 mm-unstable

Sidhartha Kumar (8):
  mm/hugetlb: convert isolate_hugetlb to folios
  mm/hugetlb: convert __update_and_free_page() to folios
  mm/hugetlb: convert dequeue_hugetlb_page functions to folios
  mm/hugetlb: convert alloc_surplus_huge_page() to folios
  mm/hugetlb: increase use of folios in alloc_huge_page()
  mm/hugetlb: convert alloc_migrate_huge_page to folios
  mm/hugetlb: convert restore_reserve_on_error() to folios
  mm/hugetlb: convert demote_free_huge_page to folios

 include/linux/hugetlb.h        |  10 +-
 include/linux/hugetlb_cgroup.h |   8 +-
 include/linux/mm.h             |   5 +
 mm/gup.c                       |   2 +-
 mm/hugetlb.c                   | 213 +++++++++++++++++----------------
 mm/hugetlb_cgroup.c            |   8 +-
 mm/memory-failure.c            |   2 +-
 mm/memory_hotplug.c            |   2 +-
 mm/mempolicy.c                 |   2 +-
 mm/migrate.c                   |   7 +-
 10 files changed, 136 insertions(+), 123 deletions(-)

-- 
2.39.0


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-03 20:56   ` Matthew Wilcox
  2023-01-06 23:04   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() " Sidhartha Kumar
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Convert isolate_hugetlb() to take in a folio and convert its callers to
pass a folio. Using page_folio() to convert the callers to use a folio is
safe as isolate_hugetlb() operates on a head page.

Also add a folio equivalent of get_page_unless_zero().

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h |  4 ++--
 include/linux/mm.h      |  5 +++++
 mm/gup.c                |  2 +-
 mm/hugetlb.c            | 16 ++++++++--------
 mm/memory-failure.c     |  2 +-
 mm/memory_hotplug.c     |  2 +-
 mm/mempolicy.c          |  2 +-
 mm/migrate.c            |  2 +-
 8 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 551834cd5299..482929b2d044 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -169,7 +169,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 						long freed);
-int isolate_hugetlb(struct page *page, struct list_head *list);
+int isolate_hugetlb(struct folio *folio, struct list_head *list);
 int get_hwpoison_huge_page(struct page *page, bool *hugetlb, bool unpoison);
 int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
 				bool *migratable_cleared);
@@ -374,7 +374,7 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
 	return NULL;
 }
 
-static inline int isolate_hugetlb(struct page *page, struct list_head *list)
+static inline int isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
 	return -EBUSY;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2dd5a37d078..cd8508d728f1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -775,6 +775,11 @@ static inline bool get_page_unless_zero(struct page *page)
 	return page_ref_add_unless(page, 1, 0);
 }
 
+static inline bool get_folio_unless_zero(struct folio *folio)
+{
+	return folio_ref_add_unless(folio, 1, 0);
+}
+
 extern int page_is_ram(unsigned long pfn);
 
 enum {
diff --git a/mm/gup.c b/mm/gup.c
index 5182abaaecde..bdb00b9df89e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1843,7 +1843,7 @@ static unsigned long collect_longterm_unpinnable_pages(
 			continue;
 
 		if (folio_test_hugetlb(folio)) {
-			isolate_hugetlb(&folio->page, movable_page_list);
+			isolate_hugetlb(folio, movable_page_list);
 			continue;
 		}
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0c58f6519b9a..90c6f0402c7b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2781,7 +2781,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
 		 * Fail with -EBUSY if not possible.
 		 */
 		spin_unlock_irq(&hugetlb_lock);
-		ret = isolate_hugetlb(&old_folio->page, list);
+		ret = isolate_hugetlb(old_folio, list);
 		spin_lock_irq(&hugetlb_lock);
 		goto free_new;
 	} else if (!folio_test_hugetlb_freed(old_folio)) {
@@ -2856,7 +2856,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
+	if (folio_ref_count(folio) && !isolate_hugetlb(folio, list))
 		ret = 0;
 	else if (!folio_ref_count(folio))
 		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
@@ -7271,19 +7271,19 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
  * These functions are overwritable if your architecture needs its own
  * behavior.
  */
-int isolate_hugetlb(struct page *page, struct list_head *list)
+int isolate_hugetlb(struct folio *folio, struct list_head *list)
 {
 	int ret = 0;
 
 	spin_lock_irq(&hugetlb_lock);
-	if (!PageHeadHuge(page) ||
-	    !HPageMigratable(page) ||
-	    !get_page_unless_zero(page)) {
+	if (!folio_test_hugetlb(folio) ||
+	    !folio_test_hugetlb_migratable(folio) ||
+	    !get_folio_unless_zero(folio)) {
 		ret = -EBUSY;
 		goto unlock;
 	}
-	ClearHPageMigratable(page);
-	list_move_tail(&page->lru, list);
+	folio_clear_hugetlb_migratable(folio);
+	list_move_tail(&folio->lru, list);
 unlock:
 	spin_unlock_irq(&hugetlb_lock);
 	return ret;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 63d8501001c6..cf60c0fa795c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2438,7 +2438,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
 	bool isolated = false;
 
 	if (PageHuge(page)) {
-		isolated = !isolate_hugetlb(page, pagelist);
+		isolated = !isolate_hugetlb(page_folio(page), pagelist);
 	} else {
 		bool lru = !__PageMovable(page);
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index fd40f7e9f176..a1e8c3e9ab08 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1641,7 +1641,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 
 		if (PageHuge(page)) {
 			pfn = page_to_pfn(head) + compound_nr(head) - 1;
-			isolate_hugetlb(head, &source);
+			isolate_hugetlb(folio, &source);
 			continue;
 		} else if (PageTransHuge(page))
 			pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 61aa9aedb728..4e62b26539c9 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -601,7 +601,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
 	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
-		if (isolate_hugetlb(page, qp->pagelist) &&
+		if (isolate_hugetlb(page_folio(page), qp->pagelist) &&
 			(flags & MPOL_MF_STRICT))
 			/*
 			 * Failed to isolate page but allow migrating pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 4aea647a0180..6932b3d5a9dd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1732,7 +1732,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 
 	if (PageHuge(page)) {
 		if (PageHead(page)) {
-			err = isolate_hugetlb(page, pagelist);
+			err = isolate_hugetlb(page_folio(page), pagelist);
 			if (!err)
 				err = 1;
 		}
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
  2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-06 23:46   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions " Sidhartha Kumar
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Change __update_and_free_page() to __update_and_free_hugetlb_folio() by
changing its callers to pass in a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 90c6f0402c7b..b06ec8d60794 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1556,10 +1556,10 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
 	enqueue_hugetlb_folio(h, folio);
 }
 
-static void __update_and_free_page(struct hstate *h, struct page *page)
+static void __update_and_free_hugetlb_folio(struct hstate *h,
+						struct folio *folio)
 {
 	int i;
-	struct folio *folio = page_folio(page);
 	struct page *subpage;
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1572,7 +1572,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	if (folio_test_hugetlb_raw_hwp_unreliable(folio))
 		return;
 
-	if (hugetlb_vmemmap_restore(h, page)) {
+	if (hugetlb_vmemmap_restore(h, &folio->page)) {
 		spin_lock_irq(&hugetlb_lock);
 		/*
 		 * If we cannot allocate vmemmap pages, just refuse to free the
@@ -1608,7 +1608,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_folio(folio, huge_page_order(h));
 	} else {
-		__free_pages(page, huge_page_order(h));
+		__free_pages(&folio->page, huge_page_order(h));
 	}
 }
 
@@ -1648,7 +1648,7 @@ static void free_hpage_workfn(struct work_struct *work)
 		 */
 		h = size_to_hstate(page_size(page));
 
-		__update_and_free_page(h, page);
+		__update_and_free_hugetlb_folio(h, page_folio(page));
 
 		cond_resched();
 	}
@@ -1665,7 +1665,7 @@ static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
 				 bool atomic)
 {
 	if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
-		__update_and_free_page(h, &folio->page);
+		__update_and_free_hugetlb_folio(h, folio);
 		return;
 	}
 
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
  2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
  2023-01-03 19:13 ` [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() " Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-03 21:00   ` Matthew Wilcox
  2023-01-03 19:13 ` [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() " Sidhartha Kumar
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

dequeue_huge_page_node_exact() is changed to
dequeue_hugetlb_folio_node_exact() and dequeue_huge_page_nodemask() is
changed to dequeue_hugetlb_folio_nodemask(). Update their callers to
handle the returned folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 55 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b06ec8d60794..8dffb77d3510 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1140,32 +1140,36 @@ static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
 	folio_set_hugetlb_freed(folio);
 }
 
-static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+static struct folio *dequeue_hugetlb_folio_node_exact(struct hstate *h,
+								int nid)
 {
 	struct page *page;
+	struct folio *folio;
 	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
 
 	lockdep_assert_held(&hugetlb_lock);
 	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
-		if (pin && !is_longterm_pinnable_page(page))
+		folio = page_folio(page);
+
+		if (pin && !folio_is_longterm_pinnable(folio))
 			continue;
 
-		if (PageHWPoison(page))
+		if (folio_test_hwpoison(folio))
 			continue;
 
-		list_move(&page->lru, &h->hugepage_activelist);
-		set_page_refcounted(page);
-		ClearHPageFreed(page);
+		list_move(&folio->lru, &h->hugepage_activelist);
+		folio_ref_unfreeze(folio, 1);
+		folio_clear_hugetlb_freed(folio);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
-		return page;
+		return folio;
 	}
 
 	return NULL;
 }
 
-static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,
-		nodemask_t *nmask)
+static struct folio *dequeue_hugetlb_folio_nodemask(struct hstate *h, gfp_t gfp_mask,
+							int nid, nodemask_t *nmask)
 {
 	unsigned int cpuset_mems_cookie;
 	struct zonelist *zonelist;
@@ -1178,7 +1182,7 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 retry_cpuset:
 	cpuset_mems_cookie = read_mems_allowed_begin();
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nmask) {
-		struct page *page;
+		struct folio *folio;
 
 		if (!cpuset_zone_allowed(zone, gfp_mask))
 			continue;
@@ -1190,9 +1194,9 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 			continue;
 		node = zone_to_nid(zone);
 
-		page = dequeue_huge_page_node_exact(h, node);
-		if (page)
-			return page;
+		folio = dequeue_hugetlb_folio_node_exact(h, node);
+		if (folio)
+			return folio;
 	}
 	if (unlikely(read_mems_allowed_retry(cpuset_mems_cookie)))
 		goto retry_cpuset;
@@ -1210,7 +1214,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 				unsigned long address, int avoid_reserve,
 				long chg)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	struct mempolicy *mpol;
 	gfp_t gfp_mask;
 	nodemask_t *nodemask;
@@ -1232,22 +1236,24 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 
 	if (mpol_is_preferred_many(mpol)) {
-		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
+							nid, nodemask);
 
 		/* Fallback to all nodes if page==NULL */
 		nodemask = NULL;
 	}
 
-	if (!page)
-		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	if (!folio)
+		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
+							nid, nodemask);
 
-	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
-		SetHPageRestoreReserve(page);
+	if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
+		folio_set_hugetlb_restore_reserve(folio);
 		h->resv_huge_pages--;
 	}
 
 	mpol_cond_put(mpol);
-	return page;
+	return &folio->page;
 
 err:
 	return NULL;
@@ -2331,12 +2337,13 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 {
 	spin_lock_irq(&hugetlb_lock);
 	if (available_huge_pages(h)) {
-		struct page *page;
+		struct folio *folio;
 
-		page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
-		if (page) {
+		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
+						preferred_nid, nmask);
+		if (folio) {
 			spin_unlock_irq(&hugetlb_lock);
-			return page;
+			return &folio->page;
 		}
 	}
 	spin_unlock_irq(&hugetlb_lock);
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
                   ` (2 preceding siblings ...)
  2023-01-03 19:13 ` [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions " Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-07  0:15   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page() Sidhartha Kumar
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Change alloc_surplus_huge_page() to alloc_surplus_hugetlb_folio() and
update its callers.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8dffb77d3510..0b8bab52bc7e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2237,8 +2237,8 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 /*
  * Allocates a fresh surplus page from the page allocator.
  */
-static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
-						int nid, nodemask_t *nmask)
+static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
+				gfp_t gfp_mask,	int nid, nodemask_t *nmask)
 {
 	struct folio *folio = NULL;
 
@@ -2275,7 +2275,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 out_unlock:
 	spin_unlock_irq(&hugetlb_lock);
 
-	return &folio->page;
+	return folio;
 }
 
 static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
@@ -2308,7 +2308,7 @@ static
 struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 		struct vm_area_struct *vma, unsigned long addr)
 {
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	struct mempolicy *mpol;
 	gfp_t gfp_mask = htlb_alloc_mask(h);
 	int nid;
@@ -2319,16 +2319,16 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 		gfp_t gfp = gfp_mask | __GFP_NOWARN;
 
 		gfp &=  ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
-		page = alloc_surplus_huge_page(h, gfp, nid, nodemask);
+		folio = alloc_surplus_hugetlb_folio(h, gfp, nid, nodemask);
 
 		/* Fallback to all nodes if page==NULL */
 		nodemask = NULL;
 	}
 
-	if (!page)
-		page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
+	if (!folio)
+		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, nid, nodemask);
 	mpol_cond_put(mpol);
-	return page;
+	return &folio->page;
 }
 
 /* page migration callback function */
@@ -2377,6 +2377,7 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 	__must_hold(&hugetlb_lock)
 {
 	LIST_HEAD(surplus_list);
+	struct folio *folio;
 	struct page *page, *tmp;
 	int ret;
 	long i;
@@ -2396,13 +2397,13 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 retry:
 	spin_unlock_irq(&hugetlb_lock);
 	for (i = 0; i < needed; i++) {
-		page = alloc_surplus_huge_page(h, htlb_alloc_mask(h),
+		folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
 				NUMA_NO_NODE, NULL);
-		if (!page) {
+		if (!folio) {
 			alloc_ok = false;
 			break;
 		}
-		list_add(&page->lru, &surplus_list);
+		list_add(&folio->lru, &surplus_list);
 		cond_resched();
 	}
 	allocated += i;
@@ -3355,7 +3356,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * First take pages out of surplus state.  Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
-	 * We might race with alloc_surplus_huge_page() here and be unable
+	 * We might race with alloc_surplus_hugetlb_folio() here and be unable
 	 * to convert a surplus huge page to a normal huge page. That is
 	 * not critical, though, it just means the overall size of the
 	 * pool might be one hugepage larger than it needs to be, but
@@ -3398,7 +3399,7 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to
 	 * exceed overcommit. There are few sane options here. Since
-	 * alloc_surplus_huge_page() is checking the global counter,
+	 * alloc_surplus_hugetlb_folio() is checking the global counter,
 	 * though, we'll note that we're not allowed to exceed surplus
 	 * and won't grow the pool anywhere else. Not until one of the
 	 * sysctls are changed, or the surplus pages go out of use.
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page()
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
                   ` (3 preceding siblings ...)
  2023-01-03 19:13 ` [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() " Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-07  0:30   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios Sidhartha Kumar
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Change hugetlb_cgroup_commit_charge{,_rsvd}(), dequeue_huge_page_vma()
and alloc_buddy_huge_page_with_mpol() to use folios so alloc_huge_page()
is cleaned up by operating on folios until its return.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb_cgroup.h |  8 ++++----
 mm/hugetlb.c                   | 33 ++++++++++++++++-----------------
 mm/hugetlb_cgroup.c            |  8 ++------
 3 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index f706626a8063..3d82d91f49ac 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -141,10 +141,10 @@ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
 					     struct hugetlb_cgroup **ptr);
 extern void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 					 struct hugetlb_cgroup *h_cg,
-					 struct page *page);
+					 struct folio *folio);
 extern void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 					      struct hugetlb_cgroup *h_cg,
-					      struct page *page);
+					      struct folio *folio);
 extern void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
 					 struct folio *folio);
 extern void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
@@ -230,14 +230,14 @@ static inline int hugetlb_cgroup_charge_cgroup_rsvd(int idx,
 
 static inline void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 						struct hugetlb_cgroup *h_cg,
-						struct page *page)
+						struct folio *folio)
 {
 }
 
 static inline void
 hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 				  struct hugetlb_cgroup *h_cg,
-				  struct page *page)
+				  struct folio *folio)
 {
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0b8bab52bc7e..640ca4eaccf2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1209,7 +1209,7 @@ static unsigned long available_huge_pages(struct hstate *h)
 	return h->free_huge_pages - h->resv_huge_pages;
 }
 
-static struct page *dequeue_huge_page_vma(struct hstate *h,
+static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 				struct vm_area_struct *vma,
 				unsigned long address, int avoid_reserve,
 				long chg)
@@ -1253,7 +1253,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	}
 
 	mpol_cond_put(mpol);
-	return &folio->page;
+	return folio;
 
 err:
 	return NULL;
@@ -2305,7 +2305,7 @@ static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
  * Use the VMA's mpolicy to allocate a huge page from the buddy.
  */
 static
-struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
+struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
 		struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio = NULL;
@@ -2328,7 +2328,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	if (!folio)
 		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, nid, nodemask);
 	mpol_cond_put(mpol);
-	return &folio->page;
+	return folio;
 }
 
 /* page migration callback function */
@@ -2877,7 +2877,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 {
 	struct hugepage_subpool *spool = subpool_vma(vma);
 	struct hstate *h = hstate_vma(vma);
-	struct page *page;
 	struct folio *folio;
 	long map_chg, map_commit;
 	long gbl_chg;
@@ -2941,34 +2940,34 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	 * from the global free pool (global change).  gbl_chg == 0 indicates
 	 * a reservation exists for the allocation.
 	 */
-	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-	if (!page) {
+	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
+	if (!folio) {
 		spin_unlock_irq(&hugetlb_lock);
-		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
-		if (!page)
+		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
+		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
-			SetHPageRestoreReserve(page);
+			folio_set_hugetlb_restore_reserve(folio);
 			h->resv_huge_pages--;
 		}
-		list_add(&page->lru, &h->hugepage_activelist);
-		set_page_refcounted(page);
+		list_add(&folio->lru, &h->hugepage_activelist);
+		folio_ref_unfreeze(folio, 1);
 		/* Fall through */
 	}
-	folio = page_folio(page);
-	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
+
+	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
 	/* If allocation is not consuming a reservation, also store the
 	 * hugetlb_cgroup pointer on the page.
 	 */
 	if (deferred_reserve) {
 		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
-						  h_cg, page);
+						  h_cg, folio);
 	}
 
 	spin_unlock_irq(&hugetlb_lock);
 
-	hugetlb_set_page_subpool(page, spool);
+	hugetlb_set_folio_subpool(folio, spool);
 
 	map_commit = vma_commit_reservation(h, vma, addr);
 	if (unlikely(map_chg > map_commit)) {
@@ -2989,7 +2988,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					pages_per_huge_page(h), folio);
 	}
-	return page;
+	return &folio->page;
 
 out_uncharge_cgroup:
 	hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index d9e4425d81ac..dedd2edb076e 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -331,19 +331,15 @@ static void __hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 
 void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
 				  struct hugetlb_cgroup *h_cg,
-				  struct page *page)
+				  struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, false);
 }
 
 void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
 				       struct hugetlb_cgroup *h_cg,
-				       struct page *page)
+				       struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-
 	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, true);
 }
 
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
                   ` (4 preceding siblings ...)
  2023-01-03 19:13 ` [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page() Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-07  0:54   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() " Sidhartha Kumar
  2023-01-03 19:13 ` [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page " Sidhartha Kumar
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both functions
now return a folio rather than a page.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h |  6 +++---
 mm/hugetlb.c            | 18 +++++++++---------
 mm/migrate.c            |  5 ++++-
 3 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 482929b2d044..a853c13d8308 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -680,7 +680,7 @@ struct huge_bootmem_page {
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 struct page *alloc_huge_page(struct vm_area_struct *vma,
 				unsigned long addr, int avoid_reserve);
-struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
+struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask, gfp_t gfp_mask);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 				unsigned long address);
@@ -1001,8 +1001,8 @@ static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *
-alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
+static inline struct folio *
+alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
 			nodemask_t *nmask, gfp_t gfp_mask)
 {
 	return NULL;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 640ca4eaccf2..0db01718d1c3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2278,7 +2278,7 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 	return folio;
 }
 
-static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
+static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mask,
 				     int nid, nodemask_t *nmask)
 {
 	struct folio *folio;
@@ -2298,7 +2298,7 @@ static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
 	 */
 	folio_set_hugetlb_temporary(folio);
 
-	return &folio->page;
+	return folio;
 }
 
 /*
@@ -2331,8 +2331,8 @@ struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
 	return folio;
 }
 
-/* page migration callback function */
-struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
+/* folio migration callback function */
+struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
 		nodemask_t *nmask, gfp_t gfp_mask)
 {
 	spin_lock_irq(&hugetlb_lock);
@@ -2343,12 +2343,12 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 						preferred_nid, nmask);
 		if (folio) {
 			spin_unlock_irq(&hugetlb_lock);
-			return &folio->page;
+			return folio;
 		}
 	}
 	spin_unlock_irq(&hugetlb_lock);
 
-	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);
+	return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
 }
 
 /* mempolicy aware migration callback */
@@ -2357,16 +2357,16 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 {
 	struct mempolicy *mpol;
 	nodemask_t *nodemask;
-	struct page *page;
+	struct folio *folio;
 	gfp_t gfp_mask;
 	int node;
 
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
+	folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
 	mpol_cond_put(mpol);
 
-	return page;
+	return &folio->page;
 }
 
 /*
diff --git a/mm/migrate.c b/mm/migrate.c
index 6932b3d5a9dd..fab706b78be1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1622,6 +1622,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	struct migration_target_control *mtc;
 	gfp_t gfp_mask;
 	unsigned int order = 0;
+	struct folio *hugetlb_folio = NULL;
 	struct folio *new_folio = NULL;
 	int nid;
 	int zidx;
@@ -1636,7 +1637,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		struct hstate *h = folio_hstate(folio);
 
 		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
-		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
+		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
+						mtc->nmask, gfp_mask);
+		return &hugetlb_folio->page;
 	}
 
 	if (folio_test_large(folio)) {
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
                   ` (5 preceding siblings ...)
  2023-01-03 19:13 ` [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-07  0:57   ` Mike Kravetz
  2023-01-03 19:13 ` [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page " Sidhartha Kumar
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Use the hugetlb folio flag macros inside restore_reserve_on_error() and
update the comments to reflect the use of folios.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0db01718d1c3..2bb69b098117 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2678,22 +2678,23 @@ static long vma_del_reservation(struct hstate *h,
 void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 			unsigned long address, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	long rc = vma_needs_reservation(h, vma, address);
 
-	if (HPageRestoreReserve(page)) {
+	if (folio_test_hugetlb_restore_reserve(folio)) {
 		if (unlikely(rc < 0))
 			/*
 			 * Rare out of memory condition in reserve map
-			 * manipulation.  Clear HPageRestoreReserve so that
-			 * global reserve count will not be incremented
+			 * manipulation.  Clear hugetlb_restore_reserve so
+			 * that global reserve count will not be incremented
 			 * by free_huge_page.  This will make it appear
-			 * as though the reservation for this page was
+			 * as though the reservation for this folio was
 			 * consumed.  This may prevent the task from
-			 * faulting in the page at a later time.  This
+			 * faulting in the folio at a later time.  This
 			 * is better than inconsistent global huge page
 			 * accounting of reserve counts.
 			 */
-			ClearHPageRestoreReserve(page);
+			folio_clear_hugetlb_restore_reserve(folio);
 		else if (rc)
 			(void)vma_add_reservation(h, vma, address);
 		else
@@ -2704,7 +2705,7 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 			 * This indicates there is an entry in the reserve map
 			 * not added by alloc_huge_page.  We know it was added
 			 * before the alloc_huge_page call, otherwise
-			 * HPageRestoreReserve would be set on the page.
+			 * hugetlb_restore_reserve would be set on the folio.
 			 * Remove the entry so that a subsequent allocation
 			 * does not consume a reservation.
 			 */
@@ -2713,12 +2714,12 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 				/*
 				 * VERY rare out of memory condition.  Since
 				 * we can not delete the entry, set
-				 * HPageRestoreReserve so that the reserve
-				 * count will be incremented when the page
+				 * hugetlb_restore_reserve so that the reserve
+				 * count will be incremented when the folio
 				 * is freed.  This reserve will be consumed
 				 * on a subsequent allocation.
 				 */
-				SetHPageRestoreReserve(page);
+				folio_set_hugetlb_restore_reserve(folio);
 		} else if (rc < 0) {
 			/*
 			 * Rare out of memory condition from
@@ -2734,12 +2735,12 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
 				/*
 				 * For private mappings, no entry indicates
 				 * a reservation is present.  Since we can
-				 * not add an entry, set SetHPageRestoreReserve
-				 * on the page so reserve count will be
+				 * not add an entry, set hugetlb_restore_reserve
+				 * on the folio so reserve count will be
 				 * incremented when freed.  This reserve will
 				 * be consumed on a subsequent allocation.
 				 */
-				SetHPageRestoreReserve(page);
+				folio_set_hugetlb_restore_reserve(folio);
 		} else
 			/*
 			 * No reservation present, do nothing
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-03 19:13 [PATCH mm-unstable 0/8] continue hugetlb folio conversions Sidhartha Kumar
                   ` (6 preceding siblings ...)
  2023-01-03 19:13 ` [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() " Sidhartha Kumar
@ 2023-01-03 19:13 ` Sidhartha Kumar
  2023-01-07  1:11   ` Mike Kravetz
  7 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, tsahu, jhubbard, Sidhartha Kumar

Change demote_free_huge_page() to demote_free_hugetlb_folio() and change
demote_pool_huge_page() to pass in a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/hugetlb.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2bb69b098117..a89728c6987d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3438,12 +3438,12 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	return 0;
 }
 
-static int demote_free_huge_page(struct hstate *h, struct page *page)
+static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
-	int i, nid = page_to_nid(page);
+	int i, nid = folio_nid(folio);
 	struct hstate *target_hstate;
-	struct folio *folio = page_folio(page);
 	struct page *subpage;
+	struct folio *subfolio;
 	int rc = 0;
 
 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
@@ -3451,18 +3451,18 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	remove_hugetlb_folio_for_demote(h, folio, false);
 	spin_unlock_irq(&hugetlb_lock);
 
-	rc = hugetlb_vmemmap_restore(h, page);
+	rc = hugetlb_vmemmap_restore(h, &folio->page);
 	if (rc) {
-		/* Allocation of vmemmmap failed, we can not demote page */
+		/* Allocation of vmemmmap failed, we can not demote folio */
 		spin_lock_irq(&hugetlb_lock);
-		set_page_refcounted(page);
-		add_hugetlb_folio(h, page_folio(page), false);
+		folio_ref_unfreeze(folio, 1);
+		add_hugetlb_folio(h, folio, false);
 		return rc;
 	}
 
 	/*
 	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
-	 * sizes as it will not ref count pages.
+	 * sizes as it will not ref count folios.
 	 */
 	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
 
@@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	mutex_lock(&target_hstate->resize_lock);
 	for (i = 0; i < pages_per_huge_page(h);
 				i += pages_per_huge_page(target_hstate)) {
-		subpage = nth_page(page, i);
-		folio = page_folio(subpage);
+		subpage = folio_page(folio, i);
+		subfolio = page_folio(subpage);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_folio_for_demote(folio,
+			prep_compound_gigantic_folio_for_demote(subfolio,
 							target_hstate->order);
 		else
 			prep_compound_page(subpage, target_hstate->order);
-		set_page_private(subpage, 0);
-		prep_new_hugetlb_folio(target_hstate, folio, nid);
+		folio_change_private(subfolio, NULL);
+		prep_new_hugetlb_folio(target_hstate, subfolio, nid);
 		free_huge_page(subpage);
 	}
 	mutex_unlock(&target_hstate->resize_lock);
@@ -3508,6 +3508,7 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 {
 	int nr_nodes, node;
 	struct page *page;
+	struct folio *folio;
 
 	lockdep_assert_held(&hugetlb_lock);
 
@@ -3521,8 +3522,8 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 		list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
 			if (PageHWPoison(page))
 				continue;
-
-			return demote_free_huge_page(h, page);
+			folio = page_folio(page);
+			return demote_free_hugetlb_folio(h, folio);
 		}
 	}
 
-- 
2.39.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
@ 2023-01-03 20:56   ` Matthew Wilcox
  2023-01-06 23:04   ` Mike Kravetz
  1 sibling, 0 replies; 27+ messages in thread
From: Matthew Wilcox @ 2023-01-03 20:56 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, tsahu, jhubbard

On Tue, Jan 03, 2023 at 01:13:33PM -0600, Sidhartha Kumar wrote:
> +++ b/include/linux/mm.h
> @@ -775,6 +775,11 @@ static inline bool get_page_unless_zero(struct page *page)
>  	return page_ref_add_unless(page, 1, 0);
>  }
>  
> +static inline bool get_folio_unless_zero(struct folio *folio)
> +{
> +	return folio_ref_add_unless(folio, 1, 0);
> +}
> +

I think that's folio_try_get() in linux/page_ref.h.
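For reference, that helper (as it exists in kernels of this era, shown
here only to make the duplication obvious) is essentially:

	/* include/linux/page_ref.h */
	static inline bool folio_try_get(struct folio *folio)
	{
		return folio_ref_add_unless(folio, 1, 0);
	}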

The rest looks good though.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions " Sidhartha Kumar
@ 2023-01-03 21:00   ` Matthew Wilcox
  2023-01-06 23:57     ` Mike Kravetz
  0 siblings, 1 reply; 27+ messages in thread
From: Matthew Wilcox @ 2023-01-03 21:00 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, tsahu, jhubbard

On Tue, Jan 03, 2023 at 01:13:35PM -0600, Sidhartha Kumar wrote:
> +static struct folio *dequeue_hugetlb_folio_node_exact(struct hstate *h,
> +								int nid)
>  {
>  	struct page *page;
> +	struct folio *folio;
>  	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
>  
>  	lockdep_assert_held(&hugetlb_lock);
>  	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> -		if (pin && !is_longterm_pinnable_page(page))
> +		folio = page_folio(page);

I'd argue that you can pull folios directly off the hugepage_freelists.
Since they're attached through the 'lru', you know they're not tail
pages, because lru.prev aliases with compound_head.
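Something like the following (untested sketch, derived from the hunk
above) would drop the local 'page' entirely:

	struct folio *folio;

	list_for_each_entry(folio, &h->hugepage_freelists[nid], lru) {
		if (pin && !folio_is_longterm_pinnable(folio))
			continue;

		if (folio_test_hwpoison(folio))
			continue;

		list_move(&folio->lru, &h->hugepage_activelist);
		folio_ref_unfreeze(folio, 1);
		folio_clear_hugetlb_freed(folio);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		return folio;
	}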

The rest looks good.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
  2023-01-03 20:56   ` Matthew Wilcox
@ 2023-01-06 23:04   ` Mike Kravetz
  1 sibling, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-06 23:04 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Convert isolate_hugetlb() to take in a folio and convert its callers to
> pass a folio. Using page_folio() to convert the callers to use a folio is
> safe as isolate_hugetlb() operates on a head page.
> 
> Also add a folio equivalent of get_page_unless_zero().
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb.h |  4 ++--
>  include/linux/mm.h      |  5 +++++
>  mm/gup.c                |  2 +-
>  mm/hugetlb.c            | 16 ++++++++--------
>  mm/memory-failure.c     |  2 +-
>  mm/memory_hotplug.c     |  2 +-
>  mm/mempolicy.c          |  2 +-
>  mm/migrate.c            |  2 +-
>  8 files changed, 20 insertions(+), 15 deletions(-)

The hugetlb parts look fine to me.  If you address the get_folio_unless_zero
issue pointed out by Matthew,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() " Sidhartha Kumar
@ 2023-01-06 23:46   ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-06 23:46 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Change __update_and_free_page() to __update_and_free_hugetlb_folio() by
> changing its callers to pass in a folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions to folios
  2023-01-03 21:00   ` Matthew Wilcox
@ 2023-01-06 23:57     ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-06 23:57 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Sidhartha Kumar, linux-kernel, linux-mm, akpm, songmuchun, tsahu,
	jhubbard

On 01/03/23 21:00, Matthew Wilcox wrote:
> On Tue, Jan 03, 2023 at 01:13:35PM -0600, Sidhartha Kumar wrote:
> > +static struct folio *dequeue_hugetlb_folio_node_exact(struct hstate *h,
> > +								int nid)
> >  {
> >  	struct page *page;
> > +	struct folio *folio;
> >  	bool pin = !!(current->flags & PF_MEMALLOC_PIN);
> >  
> >  	lockdep_assert_held(&hugetlb_lock);
> >  	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
> > -		if (pin && !is_longterm_pinnable_page(page))
> > +		folio = page_folio(page);
> 
> I'd argue that you can pull folios directly off the hugepage_freelists.
> Since they're attached through the 'lru', you know they're not tail
> pages, because lru.prev aliases with compound_head.

Yes, then we can get rid of the local variable *page.

A quick grep shows only the routine __mem_cgroup_uncharge_list() does
this today.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() " Sidhartha Kumar
@ 2023-01-07  0:15   ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07  0:15 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Change alloc_surplus_huge_page() to alloc_surplus_hugetlb_folio() and
> update its callers.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)

Looks good to me,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page()
  2023-01-03 19:13 ` [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page() Sidhartha Kumar
@ 2023-01-07  0:30   ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07  0:30 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Change hugetlb_cgroup_commit_charge{,_rsvd}(), dequeue_huge_page_vma()
> and alloc_buddy_huge_page_with_mpol() to use folios

Nice that the only 'conversion' was to eliminate the page to folio or
folio to page calls in those routines.

>                                                     so alloc_huge_page()
> is cleaned up by operating on folios until its return.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb_cgroup.h |  8 ++++----
>  mm/hugetlb.c                   | 33 ++++++++++++++++-----------------
>  mm/hugetlb_cgroup.c            |  8 ++------
>  3 files changed, 22 insertions(+), 27 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz
> 
> diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
> index f706626a8063..3d82d91f49ac 100644
> --- a/include/linux/hugetlb_cgroup.h
> +++ b/include/linux/hugetlb_cgroup.h
> @@ -141,10 +141,10 @@ extern int hugetlb_cgroup_charge_cgroup_rsvd(int idx, unsigned long nr_pages,
>  					     struct hugetlb_cgroup **ptr);
>  extern void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
>  					 struct hugetlb_cgroup *h_cg,
> -					 struct page *page);
> +					 struct folio *folio);
>  extern void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  					      struct hugetlb_cgroup *h_cg,
> -					      struct page *page);
> +					      struct folio *folio);
>  extern void hugetlb_cgroup_uncharge_folio(int idx, unsigned long nr_pages,
>  					 struct folio *folio);
>  extern void hugetlb_cgroup_uncharge_folio_rsvd(int idx, unsigned long nr_pages,
> @@ -230,14 +230,14 @@ static inline int hugetlb_cgroup_charge_cgroup_rsvd(int idx,
>  
>  static inline void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
>  						struct hugetlb_cgroup *h_cg,
> -						struct page *page)
> +						struct folio *folio)
>  {
>  }
>  
>  static inline void
>  hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  				  struct hugetlb_cgroup *h_cg,
> -				  struct page *page)
> +				  struct folio *folio)
>  {
>  }
>  
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0b8bab52bc7e..640ca4eaccf2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1209,7 +1209,7 @@ static unsigned long available_huge_pages(struct hstate *h)
>  	return h->free_huge_pages - h->resv_huge_pages;
>  }
>  
> -static struct page *dequeue_huge_page_vma(struct hstate *h,
> +static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  				struct vm_area_struct *vma,
>  				unsigned long address, int avoid_reserve,
>  				long chg)
> @@ -1253,7 +1253,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
>  	}
>  
>  	mpol_cond_put(mpol);
> -	return &folio->page;
> +	return folio;
>  
>  err:
>  	return NULL;
> @@ -2305,7 +2305,7 @@ static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
>   * Use the VMA's mpolicy to allocate a huge page from the buddy.
>   */
>  static
> -struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
> +struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
>  		struct vm_area_struct *vma, unsigned long addr)
>  {
>  	struct folio *folio = NULL;
> @@ -2328,7 +2328,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
>  	if (!folio)
>  		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, nid, nodemask);
>  	mpol_cond_put(mpol);
> -	return &folio->page;
> +	return folio;
>  }
>  
>  /* page migration callback function */
> @@ -2877,7 +2877,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  {
>  	struct hugepage_subpool *spool = subpool_vma(vma);
>  	struct hstate *h = hstate_vma(vma);
> -	struct page *page;
>  	struct folio *folio;
>  	long map_chg, map_commit;
>  	long gbl_chg;
> @@ -2941,34 +2940,34 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  	 * from the global free pool (global change).  gbl_chg == 0 indicates
>  	 * a reservation exists for the allocation.
>  	 */
> -	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
> -	if (!page) {
> +	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
> +	if (!folio) {
>  		spin_unlock_irq(&hugetlb_lock);
> -		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
> -		if (!page)
> +		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
> +		if (!folio)
>  			goto out_uncharge_cgroup;
>  		spin_lock_irq(&hugetlb_lock);
>  		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> -			SetHPageRestoreReserve(page);
> +			folio_set_hugetlb_restore_reserve(folio);
>  			h->resv_huge_pages--;
>  		}
> -		list_add(&page->lru, &h->hugepage_activelist);
> -		set_page_refcounted(page);
> +		list_add(&folio->lru, &h->hugepage_activelist);
> +		folio_ref_unfreeze(folio, 1);
>  		/* Fall through */
>  	}
> -	folio = page_folio(page);
> -	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
> +
> +	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
>  	/* If allocation is not consuming a reservation, also store the
>  	 * hugetlb_cgroup pointer on the page.
>  	 */
>  	if (deferred_reserve) {
>  		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
> -						  h_cg, page);
> +						  h_cg, folio);
>  	}
>  
>  	spin_unlock_irq(&hugetlb_lock);
>  
> -	hugetlb_set_page_subpool(page, spool);
> +	hugetlb_set_folio_subpool(folio, spool);
>  
>  	map_commit = vma_commit_reservation(h, vma, addr);
>  	if (unlikely(map_chg > map_commit)) {
> @@ -2989,7 +2988,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
>  			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
>  					pages_per_huge_page(h), folio);
>  	}
> -	return page;
> +	return &folio->page;
>  
>  out_uncharge_cgroup:
>  	hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index d9e4425d81ac..dedd2edb076e 100644
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -331,19 +331,15 @@ static void __hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
>  
>  void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
>  				  struct hugetlb_cgroup *h_cg,
> -				  struct page *page)
> +				  struct folio *folio)
>  {
> -	struct folio *folio = page_folio(page);
> -
>  	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, false);
>  }
>  
>  void hugetlb_cgroup_commit_charge_rsvd(int idx, unsigned long nr_pages,
>  				       struct hugetlb_cgroup *h_cg,
> -				       struct page *page)
> +				       struct folio *folio)
>  {
> -	struct folio *folio = page_folio(page);
> -
>  	__hugetlb_cgroup_commit_charge(idx, nr_pages, h_cg, folio, true);
>  }
>  
> -- 
> 2.39.0
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios Sidhartha Kumar
@ 2023-01-07  0:54   ` Mike Kravetz
  2023-01-09 16:26     ` Sidhartha Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07  0:54 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
> alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both functions
> now return a folio rather than a page.

>  /* mempolicy aware migration callback */
> @@ -2357,16 +2357,16 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
>  {
>  	struct mempolicy *mpol;
>  	nodemask_t *nodemask;
> -	struct page *page;
> +	struct folio *folio;
>  	gfp_t gfp_mask;
>  	int node;
>  
>  	gfp_mask = htlb_alloc_mask(h);
>  	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
> -	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
> +	folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
>  	mpol_cond_put(mpol);
>  
> -	return page;
> +	return &folio->page;

Is it possible that folio could be NULL here and cause an addressing exception?

> diff --git a/mm/migrate.c b/mm/migrate.c
> index 6932b3d5a9dd..fab706b78be1 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1622,6 +1622,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>  	struct migration_target_control *mtc;
>  	gfp_t gfp_mask;
>  	unsigned int order = 0;
> +	struct folio *hugetlb_folio = NULL;
>  	struct folio *new_folio = NULL;
>  	int nid;
>  	int zidx;
> @@ -1636,7 +1637,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>  		struct hstate *h = folio_hstate(folio);
>  
>  		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> -		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
> +		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
> +						mtc->nmask, gfp_mask);
> +		return &hugetlb_folio->page;

and, here as well?
-- 
Mike Kravetz

>  	}
>  
>  	if (folio_test_large(folio)) {
> -- 
> 2.39.0
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() " Sidhartha Kumar
@ 2023-01-07  0:57   ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07  0:57 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Use the hugetlb folio flag macros inside restore_reserve_on_error() and
> update the comments to reflect the use of folios.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)

Looks fine,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-03 19:13 ` [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page " Sidhartha Kumar
@ 2023-01-07  1:11   ` Mike Kravetz
  2023-01-07  1:31     ` Matthew Wilcox
  0 siblings, 1 reply; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07  1:11 UTC (permalink / raw)
  To: Sidhartha Kumar, willy
  Cc: linux-kernel, linux-mm, akpm, songmuchun, tsahu, jhubbard

On 01/03/23 13:13, Sidhartha Kumar wrote:
> Change demote_free_huge_page() to demote_free_hugetlb_folio() and change
> demote_pool_huge_page() to pass in a folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  mm/hugetlb.c | 31 ++++++++++++++++---------------
>  1 file changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 2bb69b098117..a89728c6987d 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3438,12 +3438,12 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
>  	return 0;
>  }
>  
> -static int demote_free_huge_page(struct hstate *h, struct page *page)
> +static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>  {
> -	int i, nid = page_to_nid(page);
> +	int i, nid = folio_nid(folio);
>  	struct hstate *target_hstate;
> -	struct folio *folio = page_folio(page);
>  	struct page *subpage;
> +	struct folio *subfolio;
>  	int rc = 0;
>  
>  	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
> @@ -3451,18 +3451,18 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>  	remove_hugetlb_folio_for_demote(h, folio, false);
>  	spin_unlock_irq(&hugetlb_lock);
>  
> -	rc = hugetlb_vmemmap_restore(h, page);
> +	rc = hugetlb_vmemmap_restore(h, &folio->page);
>  	if (rc) {
> -		/* Allocation of vmemmmap failed, we can not demote page */
> +		/* Allocation of vmemmmap failed, we can not demote folio */
>  		spin_lock_irq(&hugetlb_lock);
> -		set_page_refcounted(page);
> -		add_hugetlb_folio(h, page_folio(page), false);
> +		folio_ref_unfreeze(folio, 1);
> +		add_hugetlb_folio(h, folio, false);
>  		return rc;
>  	}
>  
>  	/*
>  	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
> -	 * sizes as it will not ref count pages.
> +	 * sizes as it will not ref count folios.
>  	 */
>  	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
>  
> @@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>  	mutex_lock(&target_hstate->resize_lock);
>  	for (i = 0; i < pages_per_huge_page(h);
>  				i += pages_per_huge_page(target_hstate)) {
> -		subpage = nth_page(page, i);
> -		folio = page_folio(subpage);
> +		subpage = folio_page(folio, i);
> +		subfolio = page_folio(subpage);

No problems with the code, but I am not in love with the name subfolio.
I know it is patterned after 'subpage'.  For better or worse, the term
subpage is used throughout the kernel.  This would be the first usage of
the term 'subfolio'.

Matthew do you have any comments on the naming?  It is local to hugetlb,
but I would hate to see use of the term subfolio based on its introduction
here.
-- 
Mike Kravetz


>  		if (hstate_is_gigantic(target_hstate))
> -			prep_compound_gigantic_folio_for_demote(folio,
> +			prep_compound_gigantic_folio_for_demote(subfolio,
>  							target_hstate->order);
>  		else
>  			prep_compound_page(subpage, target_hstate->order);
> -		set_page_private(subpage, 0);
> -		prep_new_hugetlb_folio(target_hstate, folio, nid);
> +		folio_change_private(subfolio, NULL);
> +		prep_new_hugetlb_folio(target_hstate, subfolio, nid);
>  		free_huge_page(subpage);
>  	}
>  	mutex_unlock(&target_hstate->resize_lock);
> 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-07  1:11   ` Mike Kravetz
@ 2023-01-07  1:31     ` Matthew Wilcox
  2023-01-07 20:55       ` Mike Kravetz
  0 siblings, 1 reply; 27+ messages in thread
From: Matthew Wilcox @ 2023-01-07  1:31 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Sidhartha Kumar, linux-kernel, linux-mm, akpm, songmuchun, tsahu,
	jhubbard

On Fri, Jan 06, 2023 at 05:11:36PM -0800, Mike Kravetz wrote:
> On 01/03/23 13:13, Sidhartha Kumar wrote:
> > @@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> >  	mutex_lock(&target_hstate->resize_lock);
> >  	for (i = 0; i < pages_per_huge_page(h);
> >  				i += pages_per_huge_page(target_hstate)) {
> > -		subpage = nth_page(page, i);
> > -		folio = page_folio(subpage);
> > +		subpage = folio_page(folio, i);
> > +		subfolio = page_folio(subpage);
> 
> No problems with the code, but I am not in love with the name subfolio.
> I know it is patterned after 'subpage'.  For better or worse, the term
> subpage is used throughout the kernel.  This would be the first usage of
> the term 'subfolio'.
> 
> Matthew do you have any comments on the naming?  It is local to hugetlb,
> but I would hate to see use of the term subfolio based on its introduction
> here.

I'm really not a fan of it either.  I intended to dive into this patch
and understand the function it's modifying, in the hopes of suggesting
a better name and/or method.

Since I haven't done that yet, maybe "new" or "dest" names work?

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-07  1:31     ` Matthew Wilcox
@ 2023-01-07 20:55       ` Mike Kravetz
  2023-01-09 16:36         ` Sidhartha Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: Mike Kravetz @ 2023-01-07 20:55 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Sidhartha Kumar, linux-kernel, linux-mm, akpm, songmuchun, tsahu,
	jhubbard

On 01/07/23 01:31, Matthew Wilcox wrote:
> On Fri, Jan 06, 2023 at 05:11:36PM -0800, Mike Kravetz wrote:
> > On 01/03/23 13:13, Sidhartha Kumar wrote:
> > > @@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> > >  	mutex_lock(&target_hstate->resize_lock);
> > >  	for (i = 0; i < pages_per_huge_page(h);
> > >  				i += pages_per_huge_page(target_hstate)) {
> > > -		subpage = nth_page(page, i);
> > > -		folio = page_folio(subpage);
> > > +		subpage = folio_page(folio, i);
> > > +		subfolio = page_folio(subpage);
> > 
> > No problems with the code, but I am not in love with the name subfolio.
> > I know it is patterned after 'subpage'.  For better or worse, the term
> > subpage is used throughout the kernel.  This would be the first usage of
> > the term 'subfolio'.
> > 
> > Matthew do you have any comments on the naming?  It is local to hugetlb,
> > but I would hate to see use of the term subfolio based on its introduction
> > here.
> 
> I'm really not a fan of it either.  I intended to dive into this patch
> and understand the function it's modifying, in the hopes of suggesting
> a better name and/or method.

At a high level, this routine is splitting a very large folio (1G for
example) into multiple large folios of a smaller size (512 2M folios for
example).  The loop is iterating through the very large folio at
increments of the smaller large folio.  subfolio (previously subpage) is
used to point to the smaller large folio within the loop.
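
For reference, a rough sketch of just the indexing in that loop (condensed
from the quoted hunk; the prep and free calls are elided):

	/* h: source hstate (e.g. 1GB); target_hstate: destination hstate
	 * (e.g. 2MB).  The loop walks the big folio in steps of the smaller
	 * huge page size. */
	for (i = 0; i < pages_per_huge_page(h);
			i += pages_per_huge_page(target_hstate)) {
		subpage = folio_page(folio, i);	/* first base page of this chunk */
		subfolio = page_folio(subpage);	/* becomes the smaller folio once prepped */
		/* ... prep subfolio as a target_hstate folio, then hand it back
		 * to the target pool ... */
	}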

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios
  2023-01-07  0:54   ` Mike Kravetz
@ 2023-01-09 16:26     ` Sidhartha Kumar
  2023-01-09 18:21       ` Mike Kravetz
  0 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-09 16:26 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 1/6/23 6:54 PM, Mike Kravetz wrote:
> On 01/03/23 13:13, Sidhartha Kumar wrote:
>> Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
>> alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both functions
>> now return a folio rather than a page.
> 
>>   /* mempolicy aware migration callback */
>> @@ -2357,16 +2357,16 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
>>   {
>>   	struct mempolicy *mpol;
>>   	nodemask_t *nodemask;
>> -	struct page *page;
>> +	struct folio *folio;
>>   	gfp_t gfp_mask;
>>   	int node;
>>   
>>   	gfp_mask = htlb_alloc_mask(h);
>>   	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
>> -	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
>> +	folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
>>   	mpol_cond_put(mpol);
>>   
>> -	return page;
>> +	return &folio->page;
> 
> Is it possible that folio could be NULL here and cause an addressing exception?
> 
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 6932b3d5a9dd..fab706b78be1 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1622,6 +1622,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>   	struct migration_target_control *mtc;
>>   	gfp_t gfp_mask;
>>   	unsigned int order = 0;
>> +	struct folio *hugetlb_folio = NULL;
>>   	struct folio *new_folio = NULL;
>>   	int nid;
>>   	int zidx;
>> @@ -1636,7 +1637,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
>>   		struct hstate *h = folio_hstate(folio);
>>   
>>   		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
>> -		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
>> +		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
>> +						mtc->nmask, gfp_mask);
>> +		return &hugetlb_folio->page;
> 
> and, here as well?

Hi Mike,

It is possible that the folio could be NULL, but I believe these 
instances would not cause an addressing exception: as described in [1], 
&folio->page is safe even when the folio is NULL because the page member 
is at offset 0, so the expression never dereferences the pointer.


[1] https://lore.kernel.org/lkml/Y7h4jsv6jl0XSIsk@casper.infradead.org/T/
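
To make that concrete, a minimal sketch (simplified types, not the kernel's
actual definitions) of why the expression stays NULL-safe:

	/* Illustration only; the real struct folio in include/linux/mm_types.h
	 * is a union with many more fields, but its 'page' member likewise
	 * sits at offset 0. */
	struct page { unsigned long flags; };

	struct folio { struct page page; };

	static struct page *null_folio_to_page(void)
	{
		struct folio *folio = NULL;

		/* &folio->page is only address arithmetic (folio + 0); nothing
		 * is dereferenced, so the result is simply a NULL struct page *
		 * and existing NULL checks on the returned page keep working. */
		return &folio->page;
	}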

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-07 20:55       ` Mike Kravetz
@ 2023-01-09 16:36         ` Sidhartha Kumar
  2023-01-09 18:23           ` Mike Kravetz
  0 siblings, 1 reply; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-09 16:36 UTC (permalink / raw)
  To: Mike Kravetz, Matthew Wilcox
  Cc: linux-kernel, linux-mm, akpm, songmuchun, tsahu, jhubbard

On 1/7/23 2:55 PM, Mike Kravetz wrote:
> On 01/07/23 01:31, Matthew Wilcox wrote:
>> On Fri, Jan 06, 2023 at 05:11:36PM -0800, Mike Kravetz wrote:
>>> On 01/03/23 13:13, Sidhartha Kumar wrote:
>>>> @@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
>>>>   	mutex_lock(&target_hstate->resize_lock);
>>>>   	for (i = 0; i < pages_per_huge_page(h);
>>>>   				i += pages_per_huge_page(target_hstate)) {
>>>> -		subpage = nth_page(page, i);
>>>> -		folio = page_folio(subpage);
>>>> +		subpage = folio_page(folio, i);
>>>> +		subfolio = page_folio(subpage);
>>>
>>> No problems with the code, but I am not in love with the name subfolio.
>>> I know it is patterned after 'subpage'.  For better or worse, the term
>>> subpage is used throughout the kernel.  This would be the first usage of
>>> the term 'subfolio'.
>>>
>>> Matthew do you have any comments on the naming?  It is local to hugetlb,
>>> but I would hate to see use of the term subfolio based on its introduction
>>> here.
>>
>> I'm really not a fan of it either.  I intended to dive into this patch
>> and understand the function it's modifying, in the hopes of suggesting
>> a better name and/or method.
> 
> At a high level, this routine is splitting a very large folio (1G for
> example) into multiple large folios of a smaller size (512 2M folios for
> example).  The loop is iterating through the very large folio at
> increments of the smaller large folio.  subfolio (previously subpage) is
> used to point to the smaller large folio within the loop.
> 
If folio does not need to be part of the variable name, how about 
something like 'demote_target'? The prep call inside the loop would then 
look like:

prep_new_hugetlb_folio(target_hstate, demote_target, nid);

so it is still clear that demote_target is a folio. A more concise 
version could also be 'demote_dst' but that seems more ambiguous than 
target.

Thanks,
Sidhartha Kumar

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios
  2023-01-09 16:26     ` Sidhartha Kumar
@ 2023-01-09 18:21       ` Mike Kravetz
  0 siblings, 0 replies; 27+ messages in thread
From: Mike Kravetz @ 2023-01-09 18:21 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, tsahu, jhubbard

On 01/09/23 10:26, Sidhartha Kumar wrote:
> On 1/6/23 6:54 PM, Mike Kravetz wrote:
> > On 01/03/23 13:13, Sidhartha Kumar wrote:
> > > Change alloc_huge_page_nodemask() to alloc_hugetlb_folio_nodemask() and
> > > alloc_migrate_huge_page() to alloc_migrate_hugetlb_folio(). Both functions
> > > now return a folio rather than a page.
> > 
> > >   /* mempolicy aware migration callback */
> > > @@ -2357,16 +2357,16 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
> > >   {
> > >   	struct mempolicy *mpol;
> > >   	nodemask_t *nodemask;
> > > -	struct page *page;
> > > +	struct folio *folio;
> > >   	gfp_t gfp_mask;
> > >   	int node;
> > >   	gfp_mask = htlb_alloc_mask(h);
> > >   	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
> > > -	page = alloc_huge_page_nodemask(h, node, nodemask, gfp_mask);
> > > +	folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
> > >   	mpol_cond_put(mpol);
> > > -	return page;
> > > +	return &folio->page;
> > 
> > Is it possible that folio could be NULL here and cause an addressing exception?
> > 
> > > diff --git a/mm/migrate.c b/mm/migrate.c
> > > index 6932b3d5a9dd..fab706b78be1 100644
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -1622,6 +1622,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
> > >   	struct migration_target_control *mtc;
> > >   	gfp_t gfp_mask;
> > >   	unsigned int order = 0;
> > > +	struct folio *hugetlb_folio = NULL;
> > >   	struct folio *new_folio = NULL;
> > >   	int nid;
> > >   	int zidx;
> > > @@ -1636,7 +1637,9 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
> > >   		struct hstate *h = folio_hstate(folio);
> > >   		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> > > -		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
> > > +		hugetlb_folio = alloc_hugetlb_folio_nodemask(h, nid,
> > > +						mtc->nmask, gfp_mask);
> > > +		return &hugetlb_folio->page;
> > 
> > and, here as well?
> 
> Hi Mike,
> 
> It is possible that the folio could be NULL, but I believe these instances
> would not cause an addressing exception: as described in [1], &folio->page
> is safe even when the folio is NULL because the page member is at offset 0,
> so the expression never dereferences the pointer.
> 
> 
> [1] https://lore.kernel.org/lkml/Y7h4jsv6jl0XSIsk@casper.infradead.org/T/

Thanks!
I did not follow that thread and did not look closely as to whether
&folio->page was safe with folio == NULL.

I must say that it is going to take me some time to not pause and think
when coming upon &folio->page.  Perhaps that is good.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-09 16:36         ` Sidhartha Kumar
@ 2023-01-09 18:23           ` Mike Kravetz
  2023-01-09 20:01             ` John Hubbard
  0 siblings, 1 reply; 27+ messages in thread
From: Mike Kravetz @ 2023-01-09 18:23 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: Matthew Wilcox, linux-kernel, linux-mm, akpm, songmuchun, tsahu,
	jhubbard

On 01/09/23 10:36, Sidhartha Kumar wrote:
> On 1/7/23 2:55 PM, Mike Kravetz wrote:
> > On 01/07/23 01:31, Matthew Wilcox wrote:
> > > On Fri, Jan 06, 2023 at 05:11:36PM -0800, Mike Kravetz wrote:
> > > > On 01/03/23 13:13, Sidhartha Kumar wrote:
> > > > > @@ -3477,15 +3477,15 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
> > > > >   	mutex_lock(&target_hstate->resize_lock);
> > > > >   	for (i = 0; i < pages_per_huge_page(h);
> > > > >   				i += pages_per_huge_page(target_hstate)) {
> > > > > -		subpage = nth_page(page, i);
> > > > > -		folio = page_folio(subpage);
> > > > > +		subpage = folio_page(folio, i);
> > > > > +		subfolio = page_folio(subpage);
> > > > 
> > > > No problems with the code, but I am not in love with the name subfolio.
> > > > I know it is patterned after 'subpage'.  For better or worse, the term
> > > > subpage is used throughout the kernel.  This would be the first usage of
> > > > the term 'subfolio'.
> > > > 
> > > > Matthew do you have any comments on the naming?  It is local to hugetlb,
> > > > but I would hate to see use of the term subfolio based on its introduction
> > > > here.
> > > 
> > > I'm really not a fan of it either.  I intended to dive into this patch
> > > and understand the function it's modifying, in the hopes of suggesting
> > > a better name and/or method.
> > 
> > At a high level, this routine is splitting a very large folio (1G for
> > example) into multiple large folios of a smaller size (512 2M folios for
> > example).  The loop is iterating through the very large folio at
> > increments of the smaller large folio.  subfolio (previously subpage) is
> > used to point to the smaller large folio within the loop.
> > 
> If folio does not need to be part of the variable name, how about something
> like 'demote_target'? The prep call inside the loop would then look like:
> 
> prep_new_hugetlb_folio(target_hstate, demote_target, nid);
> 
> so it is still clear that demote_target is a folio. A more concise version
> could also be 'demote_dst' but that seems more ambiguous than target.

I am OK with that naming.  Primary concern was the introduction of the
term subfolio.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-09 18:23           ` Mike Kravetz
@ 2023-01-09 20:01             ` John Hubbard
  2023-01-09 20:53               ` Sidhartha Kumar
  0 siblings, 1 reply; 27+ messages in thread
From: John Hubbard @ 2023-01-09 20:01 UTC (permalink / raw)
  To: Mike Kravetz, Sidhartha Kumar
  Cc: Matthew Wilcox, linux-kernel, linux-mm, akpm, songmuchun, tsahu

On 1/9/23 10:23, Mike Kravetz wrote:
>>>>> No problems with the code, but I am not in love with the name subfolio.
>>>>> I know it is patterned after 'subpage'.  For better or worse, the term
>>>>> subpage is used throughout the kernel.  This would be the first usage of
>>>>> the term 'subfolio'.
>>>>>
>>>>> Matthew do you have any comments on the naming?  It is local to hugetlb,
>>>>> but I would hate to see use of the term subfolio based on its introduction
>>>>> here.
>>>>
>>>> I'm really not a fan of it either.  I intended to dive into this patch
>>>> and understand the function it's modifying, in the hopes of suggesting
>>>> a better name and/or method.
>>>
>>> At a high level, this routine is splitting a very large folio (1G for
>>> example) into multiple large folios of a smaller size (512 2M folios for
>>> example).  The loop is iterating through the very large folio at
>>> increments of the smaller large folio.  subfolio (previously subpage) is
>>> used to point to the smaller large folio within the loop.
>>>
>> If folio does not need to be part of the variable name, how about something
>> like 'demote_target'? The prep call inside the loop would then look like:
>>
>> prep_new_hugetlb_folio(target_hstate, demote_target, nid);
>>
>> so it is still clear that demote_target is a folio. A more concise version
>> could also be 'demote_dst' but that seems more ambiguous than target.
> 
> I am OK with that naming.  Primary concern was the introduction of the
> term subfolio.

How about one of these:

     smaller_folio
     inner_folio

Those are more self-explanatory, while still avoiding "subfolio".

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page to folios
  2023-01-09 20:01             ` John Hubbard
@ 2023-01-09 20:53               ` Sidhartha Kumar
  0 siblings, 0 replies; 27+ messages in thread
From: Sidhartha Kumar @ 2023-01-09 20:53 UTC (permalink / raw)
  To: John Hubbard, Mike Kravetz
  Cc: Matthew Wilcox, linux-kernel, linux-mm, akpm, songmuchun, tsahu

On 1/9/23 2:01 PM, John Hubbard wrote:
> On 1/9/23 10:23, Mike Kravetz wrote:
>>>>>> No problems with the code, but I am not in love with the name 
>>>>>> subfolio.
>>>>>> I know it is patterned after 'subpage'.  For better or worse, the 
>>>>>> term
>>>>>> subpage is used throughout the kernel.  This would be the first 
>>>>>> usage of
>>>>>> the term 'subfolio'.
>>>>>>
>>>>>> Matthew do you have any comments on the naming?  It is local to 
>>>>>> hugetlb,
>>>>>> but I would hate to see use of the term subfolio based on its 
>>>>>> introduction
>>>>>> here.
>>>>>
>>>>> I'm really not a fan of it either.  I intended to dive into this patch
>>>>> and understand the function it's modifying, in the hopes of suggesting
>>>>> a better name and/or method.
>>>>
>>>> At a high level, this routine is splitting a very large folio (1G for
>>>> example) into multiple large folios of a smaller size (512 2M folios 
>>>> for
>>>> example).  The loop is iterating through the very large folio at
>>>> increments of the smaller large folio.  subfolio (previously 
>>>> subpage) is
>>>> used to point to the smaller large folio within the loop.
>>>>
>>> If folio does not need to be part of the variable name, how about 
>>> something
>>> like 'demote_target'? The prep call inside the loop would then look 
>>> like:
>>>
>>> prep_new_hugetlb_folio(target_hstate, demote_target, nid);
>>>
>>> so it is still clear that demote_target is a folio. A more concise 
>>> version
>>> could also be 'demote_dst' but that seems more ambiguous than target.
>>
>> I am OK with that naming.  Primary concern was the introduction of the
>> term subfolio.
> 
> How about one of these:
> 
>      smaller_folio
>      inner_folio
> 
> Those are more self-explanatory, while still avoiding "subfolio".
> 
I would be fine with inner_folio.

Thanks,
Sidhartha Kumar

> thanks,


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2023-01-09 21:00 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-03 19:13 [PATCH mm-unstable 0/9] continue hugetlb folio conversions Sidhartha Kumar
2023-01-03 19:13 ` [PATCH mm-unstable 1/8] mm/hugetlb: convert isolate_hugetlb to folios Sidhartha Kumar
2023-01-03 20:56   ` Matthew Wilcox
2023-01-06 23:04   ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 2/8] mm/hugetlb: convert __update_and_free_page() " Sidhartha Kumar
2023-01-06 23:46   ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 3/8] mm/hugetlb: convert dequeue_hugetlb_page_node functions " Sidhartha Kumar
2023-01-03 21:00   ` Matthew Wilcox
2023-01-06 23:57     ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 4/8] mm/hugetlb: convert alloc_surplus_huge_page() " Sidhartha Kumar
2023-01-07  0:15   ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 5/8] mm/hugetlb: increase use of folios in alloc_huge_page() Sidhartha Kumar
2023-01-07  0:30   ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 6/8] mm/hugetlb: convert alloc_migrate_huge_page to folios Sidhartha Kumar
2023-01-07  0:54   ` Mike Kravetz
2023-01-09 16:26     ` Sidhartha Kumar
2023-01-09 18:21       ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 7/8] mm/hugetlb: convert restore_reserve_on_error() " Sidhartha Kumar
2023-01-07  0:57   ` Mike Kravetz
2023-01-03 19:13 ` [PATCH mm-unstable 8/8] mm/hugetlb: convert demote_free_huge_page " Sidhartha Kumar
2023-01-07  1:11   ` Mike Kravetz
2023-01-07  1:31     ` Matthew Wilcox
2023-01-07 20:55       ` Mike Kravetz
2023-01-09 16:36         ` Sidhartha Kumar
2023-01-09 18:23           ` Mike Kravetz
2023-01-09 20:01             ` John Hubbard
2023-01-09 20:53               ` Sidhartha Kumar
