linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/7] begin converting hugetlb code to folios
@ 2022-08-29 23:00 Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros Sidhartha Kumar
                   ` (6 more replies)
  0 siblings, 7 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

This patch series starts the conversion of the hugetlb code to operate
on struct folios rather than struct pages. This removes the ambiguity
over whether functions are operating on head pages, tail pages of
compound pages, or base pages.

This series passes the Linux Test Project (LTP) hugetlb test cases.

Patch 1 adds hugetlb-specific page flag macros that can operate on folios.

Patch 2 adds the private field of the first tail page to struct page
and struct folio. This patch depends on Matthew Wilcox's patch "mm: Add
the first tail page to struct folio" [1].

Patches 3-4 introduce hugetlb subpool helper functions which operate on
struct folios. These patches were tested using the hugepage-mmap.c
selftest along with the migratepages command.

Patch 5 converts hugetlb_delete_from_page_cache() to use folios.
This patch depends on Mike Kravetz's patch "hugetlb: rename
remove_huge_page to hugetlb_delete_from_page_cache" [2].

Patch 6 adds a folio_hstate() function to get hstate information from a
folio.

Patch 7 adds a user of folio_hstate().

Bpftrace was used to track the time spent in the free_huge_page()
function during the LTP test cases, as it is a caller of the hugetlb
subpool functions. The histograms show that performance is similar
before and after the patch series.
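
The histograms below were plausibly produced with a bpftrace script
along these lines; the exact invocation is not given in the posting, so
the probe choice here is an assumption:

```bpftrace
bpftrace -e '
kprobe:free_huge_page { @start[tid] = nsecs; }
kretprobe:free_huge_page /@start[tid]/ {
	/* per-call latency histogram, in nanoseconds */
	@nsecs = hist(nsecs - @start[tid]);
	delete(@start[tid]);
}'
```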

Time spent in 'free_huge_page'

6.0.0-rc2.master.20220823
@nsecs:
[256, 512)         14770 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K)            155 |                                                    |
[1K, 2K)             169 |                                                    |
[2K, 4K)              50 |                                                    |
[4K, 8K)              14 |                                                    |
[8K, 16K)              3 |                                                    |
[16K, 32K)             3 |                                                    |


6.0.0-rc2.master.20220823 + patch series
@nsecs: 
[256, 512)         13678 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K)            142 |                                                    |
[1K, 2K)             199 |                                                    |
[2K, 4K)              44 |                                                    |
[4K, 8K)              13 |                                                    |
[8K, 16K)              4 |                                                    |
[16K, 32K)             1 |                                                    |

[1] https://lore.kernel.org/linux-mm/20220808193430.3378317-6-willy@infradead.org/
[2] https://lore.kernel.org/all/20220824175757.20590-4-mike.kravetz@oracle.com/T/#me431952361ea576862d7eb617a5dced9807dbabb

Sidhartha Kumar (7):
  mm/hugetlb: add folio support to hugetlb specific flag macros
  mm: add private field of first tail to struct page and struct folio
  mm/hugetlb: add hugetlb_folio_subpool() helper
  mm/hugetlb: add hugetlb_set_folio_subpool() helper
  mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios
  mm/hugetlb: add folio_hstate()
  mm/migrate: use folio_hstate() in alloc_migration_target()

 fs/hugetlbfs/inode.c     | 22 ++++++++++----------
 include/linux/hugetlb.h  | 45 ++++++++++++++++++++++++++++++++++++----
 include/linux/mm_types.h | 15 ++++++++++++++
 mm/migrate.c             |  2 +-
 4 files changed, 68 insertions(+), 16 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-08-30  3:33   ` Matthew Wilcox
  2022-08-29 23:00 ` [PATCH 2/7] mm: add private field of first tail to struct page and struct folio Sidhartha Kumar
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Allows the macros which test, set, and clear the hugetlb-specific page
flags to take a hugetlb folio as input. The macros are generated as
folio_{test, set, clear}_hugetlb_{restore_reserve, migratable,
temporary, freed, vmemmap_optimized, raw_hwp_unreliable}.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index acace1a25226..ac4e98edd5b0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -583,26 +583,47 @@ enum hugetlb_page_flags {
  */
 #ifdef CONFIG_HUGETLB_PAGE
 #define TESTHPAGEFLAG(uname, flname)				\
+static __always_inline								\
+int folio_test_hugetlb_##flname(struct folio *folio)		\
+	{	void **private = &folio->private;		\
+		return test_bit(HPG_##flname, (void *)((unsigned long)private));	\
+	}	\
 static inline int HPage##uname(struct page *page)		\
 	{ return test_bit(HPG_##flname, &(page->private)); }
 
 #define SETHPAGEFLAG(uname, flname)				\
+static __always_inline							\
+void folio_set_hugetlb_##flname(struct folio *folio)		\
+	{	void **private = &folio->private;		\
+		set_bit(HPG_##flname, (void *)((unsigned long)private));		\
+	}	\
 static inline void SetHPage##uname(struct page *page)		\
 	{ set_bit(HPG_##flname, &(page->private)); }
 
 #define CLEARHPAGEFLAG(uname, flname)				\
+static __always_inline								\
+void folio_clear_hugetlb_##flname(struct folio *folio)		\
+	{	void **private = &folio->private;		\
+		clear_bit(HPG_##flname, (void *)((unsigned long)private));		\
+	}	\
 static inline void ClearHPage##uname(struct page *page)		\
 	{ clear_bit(HPG_##flname, &(page->private)); }
 #else
 #define TESTHPAGEFLAG(uname, flname)				\
+static inline int folio_test_hugetlb_##flname(struct folio *folio)		\
+	{ return 0; }										\
 static inline int HPage##uname(struct page *page)		\
 	{ return 0; }
 
 #define SETHPAGEFLAG(uname, flname)				\
+static inline void folio_set_hugetlb_##flname(struct folio *folio)		\
+	{ }		                                                \
 static inline void SetHPage##uname(struct page *page)		\
 	{ }
 
 #define CLEARHPAGEFLAG(uname, flname)				\
+static inline void folio_clear_hugetlb_##flname(struct folio *folio)	\
+	{ }		                                                \
 static inline void ClearHPage##uname(struct page *page)		\
 	{ }
 #endif
-- 
2.31.1



* [PATCH 2/7] mm: add private field of first tail to struct page and struct folio
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-08-30  3:36   ` Matthew Wilcox
  2022-09-01 17:32   ` Mike Kravetz
  2022-08-29 23:00 ` [PATCH 3/7] mm/hugetlb: add hugetlb_folio_subpool() helper Sidhartha Kumar
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Allows struct folio to store hugetlb metadata that is contained in the
private field of the first tail page.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/mm_types.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8a9ee9d24973..726c5304172c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -144,6 +144,7 @@ struct page {
 #ifdef CONFIG_64BIT
 			unsigned int compound_nr; /* 1 << compound_order */
 #endif
+			unsigned long _private_1;
 		};
 		struct {	/* Second tail page of compound page */
 			unsigned long _compound_pad_1;	/* compound_head */
@@ -251,6 +252,7 @@ struct page {
  * @_total_mapcount: Do not use directly, call folio_entire_mapcount().
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
  * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
+ * @_private_1: Do not use directly, call folio_get_private_1().
  *
  * A folio is a physically, virtually and logically contiguous set
  * of bytes.  It is a power-of-two in size, and it is aligned to that
@@ -298,6 +300,8 @@ struct folio {
 #ifdef CONFIG_64BIT
 	unsigned int _folio_nr_pages;
 #endif
+	unsigned long _private_1;
+
 };
 
 #define FOLIO_MATCH(pg, fl)						\
@@ -325,6 +329,7 @@ FOLIO_MATCH(compound_mapcount, _total_mapcount);
 FOLIO_MATCH(compound_pincount, _pincount);
 #ifdef CONFIG_64BIT
 FOLIO_MATCH(compound_nr, _folio_nr_pages);
+FOLIO_MATCH(_private_1, _private_1);
 #endif
 #undef FOLIO_MATCH
 
@@ -370,6 +375,16 @@ static inline void *folio_get_private(struct folio *folio)
 	return folio->private;
 }
 
+static inline void folio_set_private_1(struct folio *folio, unsigned long private)
+{
+	folio->_private_1 = private;
+}
+
+static inline unsigned long folio_get_private_1(struct folio *folio)
+{
+	return folio->_private_1;
+}
+
 struct page_frag_cache {
 	void * va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
-- 
2.31.1



* [PATCH 3/7] mm/hugetlb: add hugetlb_folio_subpool() helper
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 2/7] mm: add private field of first tail to struct page and struct folio Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 4/7] mm/hugetlb: add hugetlb_set_folio_subpool() helper Sidhartha Kumar
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Allows hugetlbfs_migrate_folio to check subpool information by passing in a
folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 fs/hugetlbfs/inode.c    | 4 ++--
 include/linux/hugetlb.h | 7 ++++++-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 9326693c4987..d1a6384f426e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -970,9 +970,9 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
-	if (hugetlb_page_subpool(&src->page)) {
+	if (hugetlb_folio_subpool(src)) {
 		hugetlb_set_page_subpool(&dst->page,
-					hugetlb_page_subpool(&src->page));
+					hugetlb_folio_subpool(src));
 		hugetlb_set_page_subpool(&src->page, NULL);
 	}
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c0a9bc9a6fa5..f6d5467c5ed8 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -709,12 +709,17 @@ extern unsigned int default_hstate_idx;
 
 #define default_hstate (hstates[default_hstate_idx])
 
+static inline struct hugepage_subpool *hugetlb_folio_subpool(struct folio *folio)
+{
+	return (void *)folio_get_private_1(folio);
+}
+
 /*
  * hugetlb page subpool pointer located in hpage[1].private
  */
 static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
 {
-	return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
+	return hugetlb_folio_subpool(page_folio(hpage));
 }
 
 static inline void hugetlb_set_page_subpool(struct page *hpage,
-- 
2.31.1



* [PATCH 4/7] mm/hugetlb: add hugetlb_set_folio_subpool() helper
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
                   ` (2 preceding siblings ...)
  2022-08-29 23:00 ` [PATCH 3/7] mm/hugetlb: add hugetlb_folio_subpool() helper Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios Sidhartha Kumar
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Allows hugetlb subpool information to be set through a folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 fs/hugetlbfs/inode.c    | 4 ++--
 include/linux/hugetlb.h | 8 +++++++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index d1a6384f426e..3b5c941e49a7 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -971,9 +971,9 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		return rc;
 
 	if (hugetlb_folio_subpool(src)) {
-		hugetlb_set_page_subpool(&dst->page,
+		hugetlb_set_folio_subpool(dst,
 					hugetlb_folio_subpool(src));
-		hugetlb_set_page_subpool(&src->page, NULL);
+		hugetlb_set_folio_subpool(src, NULL);
 	}
 
 	if (mode != MIGRATE_SYNC_NO_COPY)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f6d5467c5ed8..d8742c5bf454 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -722,10 +722,16 @@ static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
 	return hugetlb_folio_subpool(page_folio(hpage));
 }
 
+static inline void hugetlb_set_folio_subpool(struct folio *folio,
+					struct hugepage_subpool *subpool)
+{
+	folio_set_private_1(folio, (unsigned long)subpool);
+}
+
 static inline void hugetlb_set_page_subpool(struct page *hpage,
 					struct hugepage_subpool *subpool)
 {
-	set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool);
+	hugetlb_set_folio_subpool(page_folio(hpage), subpool);
 }
 
 static inline struct hstate *hstate_file(struct file *f)
-- 
2.31.1



* [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
                   ` (3 preceding siblings ...)
  2022-08-29 23:00 ` [PATCH 4/7] mm/hugetlb: add hugetlb_set_folio_subpool() helper Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-08-30  3:26   ` Matthew Wilcox
  2022-08-29 23:00 ` [PATCH 6/7] mm/hugetlb: add folio_hstate() Sidhartha Kumar
  2022-08-29 23:00 ` [PATCH 7/7] mm/migrate: use folio_hstate() in alloc_migration_target() Sidhartha Kumar
  6 siblings, 1 reply; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Removes the last caller of remove_huge_page() by converting the code to its
folio equivalent.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 fs/hugetlbfs/inode.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 3b5c941e49a7..7ede356cc01e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -366,9 +366,9 @@ static int hugetlbfs_write_end(struct file *file, struct address_space *mapping,
 
 static void hugetlb_delete_from_page_cache(struct page *page)
 {
-	ClearPageDirty(page);
-	ClearPageUptodate(page);
-	delete_from_page_cache(page);
+	folio_clear_dirty(folio);
+	folio_clear_uptodate(folio);
+	filemap_remove_folio(folio);
 }
 
 static void
@@ -486,15 +486,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 
 			folio_lock(folio);
 			/*
-			 * We must free the huge page and remove from page
+			 * We must free the hugetlb folio and remove from page
 			 * cache BEFORE removing the * region/reserve map
 			 * (hugetlb_unreserve_pages).  In rare out of memory
 			 * conditions, removal of the region/reserve map could
 			 * fail. Correspondingly, the subpool and global
 			 * reserve usage count can need to be adjusted.
 			 */
-			VM_BUG_ON(HPageRestoreReserve(&folio->page));
-			hugetlb_delete_from_page_cache(&folio->page);
+			VM_BUG_ON_FOLIO(folio_test_hugetlb_restore_reserve(folio), folio);
+			hugetlb_delete_from_page_cache(folio);
 			freed++;
 			if (!truncate_op) {
 				if (unlikely(hugetlb_unreserve_pages(inode,
@@ -993,7 +993,7 @@ static int hugetlbfs_error_remove_page(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	pgoff_t index = page->index;
 
-	hugetlb_delete_from_page_cache(page);
+	hugetlb_delete_from_page_cache(page_folio(page));
 	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
 		hugetlb_fix_reserve_counts(inode);
 
-- 
2.31.1



* [PATCH 6/7] mm/hugetlb: add folio_hstate()
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
                   ` (4 preceding siblings ...)
  2022-08-29 23:00 ` [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  2022-09-01 18:34   ` Mike Kravetz
  2022-08-29 23:00 ` [PATCH 7/7] mm/migrate: use folio_hstate() in alloc_migration_target() Sidhartha Kumar
  6 siblings, 1 reply; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Helper function to retrieve hstate information from a hugetlb folio.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d8742c5bf454..093b5d32d6b5 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -815,10 +815,15 @@ static inline pte_t arch_make_huge_pte(pte_t entry, unsigned int shift,
 }
 #endif
 
+static inline struct hstate *folio_hstate(struct folio *folio)
+{
+	VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio);
+	return size_to_hstate(folio_size(folio));
+}
+
 static inline struct hstate *page_hstate(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return size_to_hstate(page_size(page));
+	return folio_hstate(page_folio(page));
 }
 
 static inline unsigned hstate_index_to_shift(unsigned index)
-- 
2.31.1



* [PATCH 7/7] mm/migrate: use folio_hstate() in alloc_migration_target()
  2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
                   ` (5 preceding siblings ...)
  2022-08-29 23:00 ` [PATCH 6/7] mm/hugetlb: add folio_hstate() Sidhartha Kumar
@ 2022-08-29 23:00 ` Sidhartha Kumar
  6 siblings, 0 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-29 23:00 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, songmuchun, mike.kravetz, willy, vbabka, william.kucharski,
	dhowells, peterx, arnd, ccross, hughd, ebiederm, Sidhartha Kumar

Allows alloc_migration_target to pass in a folio to get hstate information.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 mm/migrate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..55392a706493 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1589,7 +1589,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 		nid = folio_nid(folio);
 
 	if (folio_test_hugetlb(folio)) {
-		struct hstate *h = page_hstate(&folio->page);
+		struct hstate *h = folio_hstate(folio);
 
 		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
 		return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
-- 
2.31.1



* Re: [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios
  2022-08-29 23:00 ` [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios Sidhartha Kumar
@ 2022-08-30  3:26   ` Matthew Wilcox
  2022-08-30 16:47     ` Sidhartha Kumar
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2022-08-30  3:26 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On Mon, Aug 29, 2022 at 04:00:12PM -0700, Sidhartha Kumar wrote:
>  static void hugetlb_delete_from_page_cache(struct page *page)
>  {
> -	ClearPageDirty(page);
> -	ClearPageUptodate(page);
> -	delete_from_page_cache(page);
> +	folio_clear_dirty(folio);
> +	folio_clear_uptodate(folio);
> +	filemap_remove_folio(folio);
>  }

Did you send the right version of this patch?  It doesn't look like it'll
compile.


* Re: [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros
  2022-08-29 23:00 ` [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros Sidhartha Kumar
@ 2022-08-30  3:33   ` Matthew Wilcox
  2022-08-30 18:09     ` Sidhartha Kumar
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2022-08-30  3:33 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On Mon, Aug 29, 2022 at 04:00:08PM -0700, Sidhartha Kumar wrote:
>  #define TESTHPAGEFLAG(uname, flname)				\
> +static __always_inline								\
> +int folio_test_hugetlb_##flname(struct folio *folio)		\

One change I made was to have folio_test_foo() return bool instead of
int.  It helps the compiler really understand what's going on.  Maybe
some humans too ;-)

> +	{	void **private = &folio->private;		\
> +		return test_bit(HPG_##flname, (void *)((unsigned long)private));	\

I've made this tricky for you by making folio->private a void * instead
of the unsigned long in page.  Would this look better as ...

	{							\
		void *private = &folio->private;		\
		return test_bit(HPG_##flname, private);		\

perhaps?


* Re: [PATCH 2/7] mm: add private field of first tail to struct page and struct folio
  2022-08-29 23:00 ` [PATCH 2/7] mm: add private field of first tail to struct page and struct folio Sidhartha Kumar
@ 2022-08-30  3:36   ` Matthew Wilcox
  2022-09-01 17:32   ` Mike Kravetz
  1 sibling, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2022-08-30  3:36 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On Mon, Aug 29, 2022 at 04:00:09PM -0700, Sidhartha Kumar wrote:
> +++ b/include/linux/mm_types.h
> @@ -144,6 +144,7 @@ struct page {
>  #ifdef CONFIG_64BIT
>  			unsigned int compound_nr; /* 1 << compound_order */
>  #endif
> +			unsigned long _private_1;
>  		};
>  		struct {	/* Second tail page of compound page */
>  			unsigned long _compound_pad_1;	/* compound_head */

Have you tested compiling this on 32-bit?  I think you need to move
the _private_1 inside the ifdef CONFIG_64BIT.

> @@ -251,6 +252,7 @@ struct page {
>   * @_total_mapcount: Do not use directly, call folio_entire_mapcount().
>   * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
>   * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
> + * @_private_1: Do not use directly, call folio_get_private_1().
>   *
>   * A folio is a physically, virtually and logically contiguous set
>   * of bytes.  It is a power-of-two in size, and it is aligned to that
> @@ -298,6 +300,8 @@ struct folio {
>  #ifdef CONFIG_64BIT
>  	unsigned int _folio_nr_pages;
>  #endif
> +	unsigned long _private_1;

(but don't do that here!)

The intent is that _private_1 lines up with head[1].private on 32-bit.
It's a bit tricky, and I'm not sure that I'm thinking about it quite right.

>  };
>  
>  #define FOLIO_MATCH(pg, fl)						\
> @@ -325,6 +329,7 @@ FOLIO_MATCH(compound_mapcount, _total_mapcount);
>  FOLIO_MATCH(compound_pincount, _pincount);
>  #ifdef CONFIG_64BIT
>  FOLIO_MATCH(compound_nr, _folio_nr_pages);
> +FOLIO_MATCH(_private_1, _private_1);
>  #endif
>  #undef FOLIO_MATCH
>  
> @@ -370,6 +375,16 @@ static inline void *folio_get_private(struct folio *folio)
>  	return folio->private;
>  }
>  
> +static inline void folio_set_private_1(struct folio *folio, unsigned long private)
> +{
> +	folio->_private_1 = private;
> +}
> +
> +static inline unsigned long folio_get_private_1(struct folio *folio)
> +{
> +	return folio->_private_1;
> +}
> +
>  struct page_frag_cache {
>  	void * va;
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> -- 
> 2.31.1
> 


* Re: [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios
  2022-08-30  3:26   ` Matthew Wilcox
@ 2022-08-30 16:47     ` Sidhartha Kumar
  0 siblings, 0 replies; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-30 16:47 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm



On 8/29/22 8:26 PM, Matthew Wilcox wrote:
> On Mon, Aug 29, 2022 at 04:00:12PM -0700, Sidhartha Kumar wrote:
>>   static void hugetlb_delete_from_page_cache(struct page *page)
>>   {
>> -	ClearPageDirty(page);
>> -	ClearPageUptodate(page);
>> -	delete_from_page_cache(page);
>> +	folio_clear_dirty(folio);
>> +	folio_clear_uptodate(folio);
>> +	filemap_remove_folio(folio);
>>   }
> Did you send the right version of this patch?  It doesn't look like it'll
> compile.
I missed changing the function argument to struct folio while rebasing
onto Mike's patch; will fix in a v2.



* Re: [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros
  2022-08-30  3:33   ` Matthew Wilcox
@ 2022-08-30 18:09     ` Sidhartha Kumar
  2022-09-01 16:55       ` Mike Kravetz
  0 siblings, 1 reply; 18+ messages in thread
From: Sidhartha Kumar @ 2022-08-30 18:09 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, linux-mm, akpm, songmuchun, mike.kravetz, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm



On 8/29/22 8:33 PM, Matthew Wilcox wrote:
> On Mon, Aug 29, 2022 at 04:00:08PM -0700, Sidhartha Kumar wrote:
>>   #define TESTHPAGEFLAG(uname, flname)				\
>> +static __always_inline								\
>> +int folio_test_hugetlb_##flname(struct folio *folio)		\
> One change I made was to have folio_test_foo() return bool instead of
> int.  It helps the compiler really understand what's going on.  Maybe
> some humans too ;-)
>

I went with returning an int to stay consistent with the page version
of the macros which return an int. I'm fine with changing it to return
a bool.

>> +	{	void **private = &folio->private;		\
>> +		return test_bit(HPG_##flname, (void *)((unsigned long)private));	\
> I've made this tricky for you by making folio->private a void * instead
> of the unsigned long in page.  Would this look better as ...
>
> 	{							\
> 		void *private = &folio->private;		\
> 		return test_bit(HPG_##flname, private);		\
>
> perhaps?

Ya this looks much better and passes the tests, will add to v2.





* Re: [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros
  2022-08-30 18:09     ` Sidhartha Kumar
@ 2022-09-01 16:55       ` Mike Kravetz
  0 siblings, 0 replies; 18+ messages in thread
From: Mike Kravetz @ 2022-09-01 16:55 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: Matthew Wilcox, linux-kernel, linux-mm, akpm, songmuchun, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On 08/30/22 11:09, Sidhartha Kumar wrote:
> 
> 
> On 8/29/22 8:33 PM, Matthew Wilcox wrote:
> > On Mon, Aug 29, 2022 at 04:00:08PM -0700, Sidhartha Kumar wrote:
> > >   #define TESTHPAGEFLAG(uname, flname)				\
> > > +static __always_inline								\
> > > +int folio_test_hugetlb_##flname(struct folio *folio)		\
> > One change I made was to have folio_test_foo() return bool instead of
> > int.  It helps the compiler really understand what's going on.  Maybe
> > some humans too ;-)
> > 
> 
> I went with returning an int to stay consistent with the page version
> of the macros which return an int. I'm fine with changing it to return
> a bool.

I believe the page test macros returned an int when I added the hugetlb
specific versions.  So, I just did the same.  Since they are now bool,
it makes sense to have these be consistent.
-- 
Mike Kravetz

> 
> > > +	{	void **private = &folio->private;		\
> > > +		return test_bit(HPG_##flname, (void *)((unsigned long)private));	\
> > I've made this tricky for you by making folio->private a void * instead
> > of the unsigned long in page.  Would this look better as ...
> > 
> > 	{							\
> > 		void *private = &folio->private;		\
> > 		return test_bit(HPG_##flname, private);		\
> > 
> > perhaps?
> 
> Ya this looks much better and passes the tests, will add to v2.
> 
> 
> 


* Re: [PATCH 2/7] mm: add private field of first tail to struct page and struct folio
  2022-08-29 23:00 ` [PATCH 2/7] mm: add private field of first tail to struct page and struct folio Sidhartha Kumar
  2022-08-30  3:36   ` Matthew Wilcox
@ 2022-09-01 17:32   ` Mike Kravetz
  2022-09-01 18:32     ` Matthew Wilcox
  1 sibling, 1 reply; 18+ messages in thread
From: Mike Kravetz @ 2022-09-01 17:32 UTC (permalink / raw)
  To: Sidhartha Kumar, willy
  Cc: linux-kernel, linux-mm, akpm, songmuchun, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On 08/29/22 16:00, Sidhartha Kumar wrote:
> Allows struct folio to store hugetlb metadata that is contained in the
> private field of the first tail page.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/mm_types.h | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 8a9ee9d24973..726c5304172c 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -144,6 +144,7 @@ struct page {
>  #ifdef CONFIG_64BIT
>  			unsigned int compound_nr; /* 1 << compound_order */
>  #endif
> +			unsigned long _private_1;
>  		};
>  		struct {	/* Second tail page of compound page */
>  			unsigned long _compound_pad_1;	/* compound_head */
> @@ -251,6 +252,7 @@ struct page {
>   * @_total_mapcount: Do not use directly, call folio_entire_mapcount().
>   * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
>   * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
> + * @_private_1: Do not use directly, call folio_get_private_1().
>   *
>   * A folio is a physically, virtually and logically contiguous set
>   * of bytes.  It is a power-of-two in size, and it is aligned to that

Not really an issue with this patch, but it made me read more of this
comment about folios.  It goes on to say ...

 * same power-of-two.  It is at least as large as %PAGE_SIZE.  If it is
 * in the page cache, it is at a file offset which is a multiple of that
 * power-of-two.  It may be mapped into userspace at an address which is
 * at an arbitrary page offset, but its kernel virtual address is aligned
 * to its size.
 */

This series is to begin converting hugetlb code to folios.  Just want to
note that 'hugetlb folios' have specific user space alignment restrictions.
So, I do not think the comment about arbitrary page offset would apply to
hugetlb.

Matthew, should we note that hugetlb is special in the comment?  Or, is it
not worth updating?

Also, folio_get_private_1 will be used for the hugetlb subpool pointer
which resides in page[1].private.  This is used in the next patch of
this series.  I'm sure you are aware that hugetlb also uses page private
in sub pages 2 and 3.  Can/will/should this method of accessing private
in sub pages be expanded to cover these as well?  Expansion can happen
later, but if this can not be expanded perhaps we should come up with
another scheme.
-- 
Mike Kravetz



> @@ -298,6 +300,8 @@ struct folio {
>  #ifdef CONFIG_64BIT
>  	unsigned int _folio_nr_pages;
>  #endif
> +	unsigned long _private_1;
> +
>  };
>  
>  #define FOLIO_MATCH(pg, fl)						\
> @@ -325,6 +329,7 @@ FOLIO_MATCH(compound_mapcount, _total_mapcount);
>  FOLIO_MATCH(compound_pincount, _pincount);
>  #ifdef CONFIG_64BIT
>  FOLIO_MATCH(compound_nr, _folio_nr_pages);
> +FOLIO_MATCH(_private_1, _private_1);
>  #endif
>  #undef FOLIO_MATCH
>  
> @@ -370,6 +375,16 @@ static inline void *folio_get_private(struct folio *folio)
>  	return folio->private;
>  }
>  
> +static inline void folio_set_private_1(struct folio *folio, unsigned long private)
> +{
> +	folio->_private_1 = private;
> +}
> +
> +static inline unsigned long folio_get_private_1(struct folio *folio)
> +{
> +	return folio->_private_1;
> +}
> +
>  struct page_frag_cache {
>  	void * va;
>  #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> -- 
> 2.31.1
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/7] mm: add private field of first tail to struct page and struct folio
  2022-09-01 17:32   ` Mike Kravetz
@ 2022-09-01 18:32     ` Matthew Wilcox
  2022-09-01 20:29       ` Mike Kravetz
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2022-09-01 18:32 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Sidhartha Kumar, linux-kernel, linux-mm, akpm, songmuchun,
	vbabka, william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On Thu, Sep 01, 2022 at 10:32:43AM -0700, Mike Kravetz wrote:
> Not really an issue with this patch, but it made me read more of this
> comment about folios.  It goes on to say ...
> 
>  * same power-of-two.  It is at least as large as %PAGE_SIZE.  If it is
>  * in the page cache, it is at a file offset which is a multiple of that
>  * power-of-two.  It may be mapped into userspace at an address which is
>  * at an arbitrary page offset, but its kernel virtual address is aligned
>  * to its size.
>  */
> 
> This series is to begin converting hugetlb code to folios.  Just want to
> note that 'hugetlb folios' have specific user space alignment restrictions.
> So, I do not think the comment about arbitrary page offset would apply to
> hugetlb.
> 
> Matthew, should we note that hugetlb is special in the comment?  Or, is it
> not worth updating?

I'm open to updating it if we can find good wording.  What I'm trying
to get across there is that when dealing with folios, you can assume
that they're naturally aligned physically, logically (in the file) and
virtually (kernel address), but not necessarily virtually (user
address).  Hugetlb folios are special in that they are guaranteed to
be virtually aligned in user space, but I don't know if here is the
right place to document that.  It's an additional restriction, so code
which handles generic folios doesn't need to know it.

> Also, folio_get_private_1 will be used for the hugetlb subpool pointer
> which resides in page[1].private.  This is used in the next patch of
> this series.  I'm sure you are aware that hugetlb also uses page private
> in sub pages 2 and 3.  Can/will/should this method of accessing private
> in sub pages be expanded to cover these as well?  Expansion can happen
> later, but if this can not be expanded perhaps we should come up with
> another scheme.

There's a few ways of tackling this.  What I'm currently thinking is
that we change how hugetlbfs uses struct page to store its extra data.
It would end up looking something like this (in struct page):

+++ b/include/linux/mm_types.h
@@ -147,9 +147,10 @@ struct page {
                };
                struct {        /* Second tail page of compound page */
                        unsigned long _compound_pad_1;  /* compound_head */
-                       unsigned long _compound_pad_2;
                        /* For both global and memcg */
                        struct list_head deferred_list;
+                       unsigned long hugetlbfs_private_2;
+                       unsigned long hugetlbfs_private_3;
                };
                struct {        /* Page table pages */
                        unsigned long _pt_pad_1;        /* compound_head */

although we could use better names and/or types?  I haven't looked to
see what you're storing here yet.  And then we can make the
corresponding change to struct folio to add these elements at the
right place.

Does that sound sensible?

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 6/7] mm/hugetlb add folio_hstate()
  2022-08-29 23:00 ` [PATCH 6/7] mm/hugetlb add folio_hstate() Sidhartha Kumar
@ 2022-09-01 18:34   ` Mike Kravetz
  0 siblings, 0 replies; 18+ messages in thread
From: Mike Kravetz @ 2022-09-01 18:34 UTC (permalink / raw)
  To: Sidhartha Kumar
  Cc: linux-kernel, linux-mm, akpm, songmuchun, willy, vbabka,
	william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On 08/29/22 16:00, Sidhartha Kumar wrote:
> Helper function to retrieve hstate information from a hugetlb folio.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
>  include/linux/hugetlb.h | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index d8742c5bf454..093b5d32d6b5 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -815,10 +815,15 @@ static inline pte_t arch_make_huge_pte(pte_t entry, unsigned int shift,
>  }
>  #endif
>  
> +static inline struct hstate *folio_hstate(struct folio *folio)
> +{
> +	VM_BUG_ON_FOLIO(!folio_test_hugetlb(folio), folio);
> +	return size_to_hstate(folio_size(folio));
> +}
> +
>  static inline struct hstate *page_hstate(struct page *page)
>  {
> -	VM_BUG_ON_PAGE(!PageHuge(page), page);
> -	return size_to_hstate(page_size(page));
> +	return folio_hstate(page_folio(page));
>  }
>  
>  static inline unsigned hstate_index_to_shift(unsigned index)
> -- 
> 2.31.1
> 

I would suggest including patch 7 which makes use of folio_hstate.

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 2/7] mm: add private field of first tail to struct page and struct folio
  2022-09-01 18:32     ` Matthew Wilcox
@ 2022-09-01 20:29       ` Mike Kravetz
  0 siblings, 0 replies; 18+ messages in thread
From: Mike Kravetz @ 2022-09-01 20:29 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Sidhartha Kumar, linux-kernel, linux-mm, akpm, songmuchun,
	vbabka, william.kucharski, dhowells, peterx, arnd, ccross, hughd,
	ebiederm

On 09/01/22 19:32, Matthew Wilcox wrote:
> On Thu, Sep 01, 2022 at 10:32:43AM -0700, Mike Kravetz wrote:
> > Not really an issue with this patch, but it made me read more of this
> > comment about folios.  It goes on to say ...
> > 
> >  * same power-of-two.  It is at least as large as %PAGE_SIZE.  If it is
> >  * in the page cache, it is at a file offset which is a multiple of that
> >  * power-of-two.  It may be mapped into userspace at an address which is
> >  * at an arbitrary page offset, but its kernel virtual address is aligned
> >  * to its size.
> >  */
> > 
> > This series is to begin converting hugetlb code to folios.  Just want to
> > note that 'hugetlb folios' have specific user space alignment restrictions.
> > So, I do not think the comment about arbitrary page offset would apply to
> > hugetlb.
> > 
> > Matthew, should we note that hugetlb is special in the comment?  Or, is it
> > not worth updating?
> 
> I'm open to updating it if we can find good wording.  What I'm trying
> to get across there is that when dealing with folios, you can assume
> that they're naturally aligned physically, logically (in the file) and
> virtually (kernel address), but not necessarily virtually (user
> address).  Hugetlb folios are special in that they are guaranteed to
> be virtually aligned in user space, but I don't know if here is the
> right place to document that.  It's an additional restriction, so code
> which handles generic folios doesn't need to know it.

Fair enough.  No need to change.  It just caught my eye.

> > Also, folio_get_private_1 will be used for the hugetlb subpool pointer
> > which resides in page[1].private.  This is used in the next patch of
> > this series.  I'm sure you are aware that hugetlb also uses page private
> > in sub pages 2 and 3.  Can/will/should this method of accessing private
> > in sub pages be expanded to cover these as well?  Expansion can happen
> > later, but if this can not be expanded perhaps we should come up with
> > another scheme.
> 
> There's a few ways of tackling this.  What I'm currently thinking is
> that we change how hugetlbfs uses struct page to store its extra data.
> It would end up looking something like this (in struct page):
> 
> +++ b/include/linux/mm_types.h
> @@ -147,9 +147,10 @@ struct page {
>                 };
>                 struct {        /* Second tail page of compound page */
>                         unsigned long _compound_pad_1;  /* compound_head */
> -                       unsigned long _compound_pad_2;
>                         /* For both global and memcg */
>                         struct list_head deferred_list;
> +                       unsigned long hugetlbfs_private_2;
> +                       unsigned long hugetlbfs_private_3;
>                 };
>                 struct {        /* Page table pages */
>                         unsigned long _pt_pad_1;        /* compound_head */
> 
> although we could use better names and/or types?  I haven't looked to
> see what you're storing here yet.  And then we can make the
> corresponding change to struct folio to add these elements at the
> right place.

I am terrible at names.  hugetlb is storing pointers in the private fields.
FWICT, something like this would work.

> 
> Does that sound sensible?

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2022-09-01 20:29 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-29 23:00 [PATCH 0/7] begin converting hugetlb code to folios Sidhartha Kumar
2022-08-29 23:00 ` [PATCH 1/7] mm/hugetlb: add folio support to hugetlb specific flag macros Sidhartha Kumar
2022-08-30  3:33   ` Matthew Wilcox
2022-08-30 18:09     ` Sidhartha Kumar
2022-09-01 16:55       ` Mike Kravetz
2022-08-29 23:00 ` [PATCH 2/7] mm: add private field of first tail to struct page and struct folio Sidhartha Kumar
2022-08-30  3:36   ` Matthew Wilcox
2022-09-01 17:32   ` Mike Kravetz
2022-09-01 18:32     ` Matthew Wilcox
2022-09-01 20:29       ` Mike Kravetz
2022-08-29 23:00 ` [PATCH 3/7] mm/hugetlb: add hugetlb_folio_subpool() helper Sidhartha Kumar
2022-08-29 23:00 ` [PATCH 4/7] mm/hugetlb: add hugetlb_set_folio_subpool() helper Sidhartha Kumar
2022-08-29 23:00 ` [PATCH 5/7] mm/hugetlb: convert hugetlb_delete_from_page_cache() to use folios Sidhartha Kumar
2022-08-30  3:26   ` Matthew Wilcox
2022-08-30 16:47     ` Sidhartha Kumar
2022-08-29 23:00 ` [PATCH 6/7] mm/hugetlb add folio_hstate() Sidhartha Kumar
2022-09-01 18:34   ` Mike Kravetz
2022-08-29 23:00 ` [PATCH 7/7] mm/migrate: use folio_hstate() in alloc_migration_target() Sidhartha Kumar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).