linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] create hugetlb flags to consolidate state
@ 2021-01-16  0:31 Mike Kravetz
  2021-01-16  0:31 ` [PATCH 1/5] hugetlb: use page.private for hugetlb specific page flags Mike Kravetz
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

While discussing a series of hugetlb fixes in [1], it became evident
that the hugetlb specific page state information is stored in a somewhat
haphazard manner.  Code dealing with state information would be easier
to read, understand and maintain if this information was stored in a
consistent manner.

This series uses page.private of the hugetlb head page for storing a
set of hugetlb specific page flags.  Routines are provided to test,
set and clear the flags.

[1] https://lore.kernel.org/r/20210106084739.63318-1-songmuchun@bytedance.com

RFC -> PATCH
  Simplified to use a single set of flag manipulation routines (Oscar)
  Moved flags and routines to hugetlb.h (Muchun)
  Changed format of page flag names (Muchun)
  Changed subpool routine names (Matthew)
  More comments in code (Oscar)

Based on v5.11-rc3-mmotm-2021-01-12-01-57

Mike Kravetz (5):
  hugetlb: use page.private for hugetlb specific page flags
  hugetlb: convert page_huge_active() to HP_Migratable flag
  hugetlb: only set HP_Migratable for migratable hstates
  hugetlb: convert PageHugeTemporary() to HP_Temporary flag
  hugetlb: convert PageHugeFreed to HP_Freed flag

 fs/hugetlbfs/inode.c       |  14 +---
 include/linux/hugetlb.h    |  81 ++++++++++++++++++++
 include/linux/page-flags.h |   6 --
 mm/hugetlb.c               | 150 +++++++++++--------------------------
 mm/memory_hotplug.c        |   8 +-
 mm/migrate.c               |  12 ---
 6 files changed, 137 insertions(+), 134 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/5] hugetlb: use page.private for hugetlb specific page flags
  2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
@ 2021-01-16  0:31 ` Mike Kravetz
  2021-01-16  0:31 ` [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag Mike Kravetz
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

As hugetlbfs evolved, state information about hugetlb pages was added.
One 'convenient' way of doing this was to use available fields in tail
pages.  Over time, it has become difficult to know the meaning or contents
of fields simply by looking at a small bit of code.  Sometimes, the
naming is just confusing.  For example: The PagePrivate flag indicates
a huge page reservation was consumed and needs to be restored if an error
is encountered and the page is freed before it is instantiated.  The
page.private field contains the pointer to a subpool if the page is
associated with one.

In an effort to make the code more readable, use page.private to contain
hugetlb specific flags.  A set of hugetlb_*_page_flag() routines is
created for flag manipulation.  More importantly, an enum of flag values
is created with names that actually reflect their purpose.

In this patch,
- Create infrastructure for hugetlb_*_page_flag functions
- Move subpool pointer to page[1].private to make way for flags, and
  create routines with meaningful names to modify the subpool field
- Use new HP_Restore_Reserve flag instead of PagePrivate

Conversion of other state information will happen in subsequent patches.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c    | 12 ++------
 include/linux/hugetlb.h | 61 +++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb.c            | 46 +++++++++++++++----------------
 3 files changed, 87 insertions(+), 32 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 740693d7f255..b8a661780c4a 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -955,15 +955,9 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
-	/*
-	 * page_private is subpool pointer in hugetlb pages.  Transfer to
-	 * new page.  PagePrivate is not associated with page_private for
-	 * hugetlb pages and can not be set here as only page_huge_active
-	 * pages can be migrated.
-	 */
-	if (page_private(page)) {
-		set_page_private(newpage, page_private(page));
-		set_page_private(page, 0);
+	if (hugetlb_page_subpool(page)) {
+		hugetlb_set_page_subpool(newpage, hugetlb_page_subpool(page));
+		hugetlb_set_page_subpool(page, NULL);
 	}
 
 	if (mode != MIGRATE_SYNC_NO_COPY)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ef5b144b8aac..64f8c7a64186 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -472,6 +472,19 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 					unsigned long flags);
 #endif /* HAVE_ARCH_HUGETLB_UNMAPPED_AREA */
 
+/*
 + * hugetlb page specific state flags.  These flags are located in page.private
+ * of the hugetlb head page.  The hugetlb_*_page_flag() routines should be used
+ * to manipulate these flags.
+ *
+ * HP_Restore_Reserve - Set when a hugetlb page consumes a reservation at
+ *	allocation time.  Cleared when page is fully instantiated.  Free
+ *	routine checks flag to restore a reservation on error paths.
+ */
+enum hugetlb_page_flags {
+	HP_Restore_Reserve = 0,
+};
+
 #ifdef CONFIG_HUGETLB_PAGE
 
 #define HSTATE_NAME_LEN 32
@@ -531,6 +544,38 @@ extern unsigned int default_hstate_idx;
 
 #define default_hstate (hstates[default_hstate_idx])
 
+static inline int hugetlb_test_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
+	return test_bit(hp_flag, &page->private);
+}
+
+static inline void hugetlb_set_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
 +	set_bit(hp_flag, &page->private);
+}
+
+static inline void hugetlb_clear_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
 +	clear_bit(hp_flag, &page->private);
+}
+
+/*
+ * hugetlb page subpool pointer located in hpage[1].private
+ */
+static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
+{
+	return (struct hugepage_subpool *)(hpage+1)->private;
+}
+
+static inline void hugetlb_set_page_subpool(struct page *hpage,
+					struct hugepage_subpool *subpool)
+{
+	set_page_private(hpage+1, (unsigned long)subpool);
+}
+
 static inline struct hstate *hstate_file(struct file *f)
 {
 	return hstate_inode(file_inode(f));
@@ -775,6 +820,22 @@ void set_page_huge_active(struct page *page);
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
+static inline int hugetlb_test_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
+	return 0;
+}
+
+static inline void hugetlb_set_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
+}
+
+static inline void hugetlb_clear_page_flag(struct page *page,
+					enum hugetlb_page_flags hp_flag)
+{
+}
+
 static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
 					   unsigned long addr,
 					   int avoid_reserve)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 737b2dce19e6..b01002d8fc2b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1133,7 +1133,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
 	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
-		SetPagePrivate(page);
+		hugetlb_set_page_flag(page, HP_Restore_Reserve);
 		h->resv_huge_pages--;
 	}
 
@@ -1407,20 +1407,19 @@ static void __free_huge_page(struct page *page)
 	 */
 	struct hstate *h = page_hstate(page);
 	int nid = page_to_nid(page);
-	struct hugepage_subpool *spool =
-		(struct hugepage_subpool *)page_private(page);
+	struct hugepage_subpool *spool = hugetlb_page_subpool(page);
 	bool restore_reserve;
 
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(page_mapcount(page), page);
 
-	set_page_private(page, 0);
+	hugetlb_set_page_subpool(page, NULL);
 	page->mapping = NULL;
-	restore_reserve = PagePrivate(page);
-	ClearPagePrivate(page);
+	restore_reserve = hugetlb_test_page_flag(page, HP_Restore_Reserve);
+	hugetlb_clear_page_flag(page, HP_Restore_Reserve);
 
 	/*
-	 * If PagePrivate() was set on page, page allocation consumed a
+	 * If HP_Restore_Reserve was set on page, page allocation consumed a
 	 * reservation.  If the page was associated with a subpool, there
 	 * would have been a page reserved in the subpool before allocation
 	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
@@ -2254,24 +2253,24 @@ static long vma_add_reservation(struct hstate *h,
  * This routine is called to restore a reservation on error paths.  In the
  * specific error paths, a huge page was allocated (via alloc_huge_page)
  * and is about to be freed.  If a reservation for the page existed,
- * alloc_huge_page would have consumed the reservation and set PagePrivate
- * in the newly allocated page.  When the page is freed via free_huge_page,
- * the global reservation count will be incremented if PagePrivate is set.
- * However, free_huge_page can not adjust the reserve map.  Adjust the
- * reserve map here to be consistent with global reserve count adjustments
- * to be made by free_huge_page.
+ * alloc_huge_page would have consumed the reservation and set
+ * HP_Restore_Reserve in the newly allocated page.  When the page is freed
+ * via free_huge_page, the global reservation count will be incremented if
+ * HP_Restore_Reserve is set.  However, free_huge_page can not adjust the
+ * reserve map.  Adjust the reserve map here to be consistent with global
+ * reserve count adjustments to be made by free_huge_page.
  */
 static void restore_reserve_on_error(struct hstate *h,
 			struct vm_area_struct *vma, unsigned long address,
 			struct page *page)
 {
-	if (unlikely(PagePrivate(page))) {
+	if (unlikely(hugetlb_test_page_flag(page, HP_Restore_Reserve))) {
 		long rc = vma_needs_reservation(h, vma, address);
 
 		if (unlikely(rc < 0)) {
 			/*
 			 * Rare out of memory condition in reserve map
-			 * manipulation.  Clear PagePrivate so that
+			 * manipulation.  Clear HP_Restore_Reserve so that
 			 * global reserve count will not be incremented
 			 * by free_huge_page.  This will make it appear
 			 * as though the reservation for this page was
@@ -2280,7 +2279,7 @@ static void restore_reserve_on_error(struct hstate *h,
 			 * is better than inconsistent global huge page
 			 * accounting of reserve counts.
 			 */
-			ClearPagePrivate(page);
+			hugetlb_clear_page_flag(page, HP_Restore_Reserve);
 		} else if (rc) {
 			rc = vma_add_reservation(h, vma, address);
 			if (unlikely(rc < 0))
@@ -2288,7 +2287,8 @@ static void restore_reserve_on_error(struct hstate *h,
 				 * See above comment about rare out of
 				 * memory condition.
 				 */
-				ClearPagePrivate(page);
+				hugetlb_clear_page_flag(page,
+						HP_Restore_Reserve);
 		} else
 			vma_end_reservation(h, vma, address);
 	}
@@ -2369,7 +2369,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		if (!page)
 			goto out_uncharge_cgroup;
 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
-			SetPagePrivate(page);
+			hugetlb_set_page_flag(page, HP_Restore_Reserve);
 			h->resv_huge_pages--;
 		}
 		spin_lock(&hugetlb_lock);
@@ -2387,7 +2387,7 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 
 	spin_unlock(&hugetlb_lock);
 
-	set_page_private(page, (unsigned long)spool);
+	hugetlb_set_page_subpool(page, spool);
 
 	map_commit = vma_commit_reservation(h, vma, addr);
 	if (unlikely(map_chg > map_commit)) {
@@ -4212,7 +4212,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	spin_lock(ptl);
 	ptep = huge_pte_offset(mm, haddr, huge_page_size(h));
 	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
-		ClearPagePrivate(new_page);
+		hugetlb_clear_page_flag(new_page, HP_Restore_Reserve);
 
 		/* Break COW */
 		huge_ptep_clear_flush(vma, haddr, ptep);
@@ -4279,7 +4279,7 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 
 	if (err)
 		return err;
-	ClearPagePrivate(page);
+	hugetlb_clear_page_flag(page, HP_Restore_Reserve);
 
 	/*
 	 * set page dirty so that it will not be removed from cache/file
@@ -4441,7 +4441,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		goto backout;
 
 	if (anon_rmap) {
-		ClearPagePrivate(page);
+		hugetlb_clear_page_flag(page, HP_Restore_Reserve);
 		hugepage_add_new_anon_rmap(page, vma, haddr);
 	} else
 		page_dup_rmap(page, true);
@@ -4755,7 +4755,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (vm_shared) {
 		page_dup_rmap(page, true);
 	} else {
-		ClearPagePrivate(page);
+		hugetlb_clear_page_flag(page, HP_Restore_Reserve);
 		hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
 	}
 
-- 
2.29.2



* [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag
  2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
  2021-01-16  0:31 ` [PATCH 1/5] hugetlb: use page.private for hugetlb specific page flags Mike Kravetz
@ 2021-01-16  0:31 ` Mike Kravetz
  2021-01-16  4:24   ` Matthew Wilcox
  2021-01-16  0:31 ` [PATCH 3/5] hugetlb: only set HP_Migratable for migratable hstates Mike Kravetz
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

Use the new hugetlb page specific flag HP_Migratable to replace the
page_huge_active interfaces.  By its name, page_huge_active implied
that a huge page was on the active list.  However, that is not really
what code checking the flag wanted to know.  It really wanted to determine
if the huge page could be migrated.  This happens when the page is actually
added to the page cache and/or task page table.  This is the reasoning
behind the name change.

The VM_BUG_ON_PAGE() calls in the *_huge_active() interfaces are not
really necessary as we KNOW the page is a hugetlb page.  Therefore, they
are removed.

The routine page_huge_active checked for PageHeadHuge before testing the
active bit.  This is unnecessary in the case where we hold a reference or
lock and know it is a hugetlb head page.  page_huge_active is also called
without holding a reference or lock (scan_movable_pages), and can race with
code freeing the page.  The extra check in page_huge_active shortened the
race window, but did not prevent the race.  Offline code calling
scan_movable_pages already deals with these races, so removing the check
is acceptable.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c       |  2 +-
 include/linux/hugetlb.h    |  4 ++++
 include/linux/page-flags.h |  6 -----
 mm/hugetlb.c               | 45 ++++++++++----------------------------
 mm/memory_hotplug.c        |  8 ++++++-
 5 files changed, 23 insertions(+), 42 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index b8a661780c4a..89bc9062b4f6 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -735,7 +735,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-		set_page_huge_active(page);
+		hugetlb_set_page_flag(page, HP_Migratable);
 		/*
 		 * unlock_page because locked by add_to_page_cache()
 		 * put_page() due to reference from alloc_huge_page()
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 64f8c7a64186..353d81913cc7 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -480,9 +480,13 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
  * HP_Restore_Reserve - Set when a hugetlb page consumes a reservation at
  *	allocation time.  Cleared when page is fully instantiated.  Free
  *	routine checks flag to restore a reservation on error paths.
+ * HP_Migratable - Set after a newly allocated page is added to the page
+ *	cache and/or page tables.  Indicates the page is a candidate for
+ *	migration.
  */
 enum hugetlb_page_flags {
 	HP_Restore_Reserve = 0,
+	HP_Migratable,
 };
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index bc6fd1ee7dd6..04a34c08e0a6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -592,15 +592,9 @@ static inline void ClearPageCompound(struct page *page)
 #ifdef CONFIG_HUGETLB_PAGE
 int PageHuge(struct page *page);
 int PageHeadHuge(struct page *page);
-bool page_huge_active(struct page *page);
 #else
 TESTPAGEFLAG_FALSE(Huge)
 TESTPAGEFLAG_FALSE(HeadHuge)
-
-static inline bool page_huge_active(struct page *page)
-{
-	return 0;
-}
 #endif
 
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b01002d8fc2b..c43cebf2f278 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1353,30 +1353,6 @@ struct hstate *size_to_hstate(unsigned long size)
 	return NULL;
 }
 
-/*
- * Test to determine whether the hugepage is "active/in-use" (i.e. being linked
- * to hstate->hugepage_activelist.)
- *
- * This function can be called for tail pages, but never returns true for them.
- */
-bool page_huge_active(struct page *page)
-{
-	return PageHeadHuge(page) && PagePrivate(&page[1]);
-}
-
-/* never called for tail page */
-void set_page_huge_active(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
-}
-
-static void clear_page_huge_active(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
-}
-
 /*
  * Internal hugetlb specific page flag. Do not use outside of the hugetlb
  * code
@@ -1438,7 +1414,7 @@ static void __free_huge_page(struct page *page)
 	}
 
 	spin_lock(&hugetlb_lock);
-	clear_page_huge_active(page);
+	hugetlb_clear_page_flag(page, HP_Migratable);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
 	hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
@@ -4221,7 +4197,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
-		set_page_huge_active(new_page);
+		hugetlb_set_page_flag(new_page, HP_Migratable);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -4458,12 +4434,12 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	spin_unlock(ptl);
 
 	/*
-	 * Only make newly allocated pages active.  Existing pages found
-	 * in the pagecache could be !page_huge_active() if they have been
-	 * isolated for migration.
+	 * Only set HP_Migratable in newly allocated pages.  Existing pages
+	 * found in the pagecache may not have HP_Migratable set if they have
+	 * been isolated for migration.
 	 */
 	if (new_page)
-		set_page_huge_active(page);
+		hugetlb_set_page_flag(page, HP_Migratable);
 
 	unlock_page(page);
 out:
@@ -4774,7 +4750,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
-	set_page_huge_active(page);
+	hugetlb_set_page_flag(page, HP_Migratable);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
@@ -5592,12 +5568,13 @@ bool isolate_huge_page(struct page *page, struct list_head *list)
 	bool ret = true;
 
 	spin_lock(&hugetlb_lock);
-	if (!PageHeadHuge(page) || !page_huge_active(page) ||
+	if (!PageHeadHuge(page) ||
+	    !hugetlb_test_page_flag(page, HP_Migratable) ||
 	    !get_page_unless_zero(page)) {
 		ret = false;
 		goto unlock;
 	}
-	clear_page_huge_active(page);
+	hugetlb_clear_page_flag(page, HP_Migratable);
 	list_move_tail(&page->lru, list);
 unlock:
 	spin_unlock(&hugetlb_lock);
@@ -5608,7 +5585,7 @@ void putback_active_hugepage(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
-	set_page_huge_active(page);
+	hugetlb_set_page_flag(page, HP_Migratable);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
 	spin_unlock(&hugetlb_lock);
 	put_page(page);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f9d57b9be8c7..10cdd281dd29 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1260,7 +1260,13 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 		if (!PageHuge(page))
 			continue;
 		head = compound_head(page);
-		if (page_huge_active(head))
+		/*
+		 * This test is racy as we hold no reference or lock.  The
 +		 * hugetlb page could have been freed and head is no longer
+		 * a hugetlb page before the following check.  In such unlikely
+		 * cases false positives and negatives are possible.
+		 */
+		if (hugetlb_test_page_flag(head, HP_Migratable))
 			goto found;
 		skip = compound_nr(head) - (page - head);
 		pfn += skip - 1;
-- 
2.29.2



* [PATCH 3/5] hugetlb: only set HP_Migratable for migratable hstates
  2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
  2021-01-16  0:31 ` [PATCH 1/5] hugetlb: use page.private for hugetlb specific page flags Mike Kravetz
  2021-01-16  0:31 ` [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag Mike Kravetz
@ 2021-01-16  0:31 ` Mike Kravetz
  2021-01-16  0:31 ` [PATCH 4/5] hugetlb: convert PageHugeTemporary() to HP_Temporary flag Mike Kravetz
  2021-01-16  0:31 ` [PATCH 5/5] hugetlb: convert PageHugeFreed to HP_Freed flag Mike Kravetz
  4 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

The HP_Migratable flag indicates a page is a candidate for migration.
Only set the flag if the page's hstate supports migration.  This allows
the migration paths to detect non-migratable pages earlier.  The check
in unmap_and_move_huge_page for migration support can be removed as it
is no longer necessary.  If migration is not supported for the hstate,
HP_Migratable will not be set, the page will not be isolated and no
attempt will be made to migrate.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/hugetlbfs/inode.c    |  2 +-
 include/linux/hugetlb.h |  9 +++++++++
 mm/hugetlb.c            |  8 ++++----
 mm/migrate.c            | 12 ------------
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 89bc9062b4f6..14d77d01e38d 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -735,7 +735,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-		hugetlb_set_page_flag(page, HP_Migratable);
+		hugetlb_set_HP_Migratable(page);
 		/*
 		 * unlock_page because locked by add_to_page_cache()
 		 * put_page() due to reference from alloc_huge_page()
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 353d81913cc7..e7157cf9967f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -716,6 +716,15 @@ static inline bool hugepage_migration_supported(struct hstate *h)
 	return arch_hugetlb_migration_supported(h);
 }
 
+/*
+ * Only set flag if hstate supports migration
+ */
+static inline void hugetlb_set_HP_Migratable(struct page *page)
+{
+	if (hugepage_migration_supported(page_hstate(page)))
+		hugetlb_set_page_flag(page, HP_Migratable);
+}
+
 /*
  * Movability check is different as compared to migration check.
  * It determines whether or not a huge page should be placed on
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c43cebf2f278..31e896c70ba0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4197,7 +4197,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
-		hugetlb_set_page_flag(new_page, HP_Migratable);
+		hugetlb_set_HP_Migratable(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -4439,7 +4439,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 * been isolated for migration.
 	 */
 	if (new_page)
-		hugetlb_set_page_flag(page, HP_Migratable);
+		hugetlb_set_HP_Migratable(page);
 
 	unlock_page(page);
 out:
@@ -4750,7 +4750,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
-	hugetlb_set_page_flag(page, HP_Migratable);
+	hugetlb_set_HP_Migratable(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
@@ -5585,7 +5585,7 @@ void putback_active_hugepage(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
-	hugetlb_set_page_flag(page, HP_Migratable);
+	hugetlb_set_HP_Migratable(page);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
 	spin_unlock(&hugetlb_lock);
 	put_page(page);
diff --git a/mm/migrate.c b/mm/migrate.c
index 0339f3874d7c..296d61613abc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1272,18 +1272,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 
-	/*
-	 * Migratability of hugepages depends on architectures and their size.
-	 * This check is necessary because some callers of hugepage migration
-	 * like soft offline and memory hotremove don't walk through page
-	 * tables or check whether the hugepage is pmd-based or not before
-	 * kicking migration.
-	 */
-	if (!hugepage_migration_supported(page_hstate(hpage))) {
-		list_move_tail(&hpage->lru, ret);
-		return -ENOSYS;
-	}
-
 	if (page_count(hpage) == 1) {
 		/* page was freed from under us. So we are done. */
 		putback_active_hugepage(hpage);
-- 
2.29.2



* [PATCH 4/5] hugetlb: convert PageHugeTemporary() to HP_Temporary flag
  2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
                   ` (2 preceding siblings ...)
  2021-01-16  0:31 ` [PATCH 3/5] hugetlb: only set HP_Migratable for migratable hstates Mike Kravetz
@ 2021-01-16  0:31 ` Mike Kravetz
  2021-01-16  0:31 ` [PATCH 5/5] hugetlb: convert PageHugeFreed to HP_Freed flag Mike Kravetz
  4 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

Use the new hugetlb specific flag HP_Temporary to replace the
PageHugeTemporary() interfaces.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |  5 +++++
 mm/hugetlb.c            | 36 +++++++-----------------------------
 2 files changed, 12 insertions(+), 29 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e7157cf9967f..166825c85875 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -483,10 +483,15 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
  * HP_Migratable - Set after a newly allocated page is added to the page
  *	cache and/or page tables.  Indicates the page is a candidate for
  *	migration.
+ * HP_Temporary - Set on a page that is temporarily allocated from the buddy
+ *	allocator.  Typically used for migration target pages when no pages
+ *	are available in the pool.  The hugetlb free page path will
+ *	immediately free pages with this flag set to the buddy allocator.
  */
 enum hugetlb_page_flags {
 	HP_Restore_Reserve = 0,
 	HP_Migratable,
+	HP_Temporary,
 };
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 31e896c70ba0..53e9168a97bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1353,28 +1353,6 @@ struct hstate *size_to_hstate(unsigned long size)
 	return NULL;
 }
 
-/*
- * Internal hugetlb specific page flag. Do not use outside of the hugetlb
- * code
- */
-static inline bool PageHugeTemporary(struct page *page)
-{
-	if (!PageHuge(page))
-		return false;
-
-	return (unsigned long)page[2].mapping == -1U;
-}
-
-static inline void SetPageHugeTemporary(struct page *page)
-{
-	page[2].mapping = (void *)-1U;
-}
-
-static inline void ClearPageHugeTemporary(struct page *page)
-{
-	page[2].mapping = NULL;
-}
-
 static void __free_huge_page(struct page *page)
 {
 	/*
@@ -1422,9 +1400,9 @@ static void __free_huge_page(struct page *page)
 	if (restore_reserve)
 		h->resv_huge_pages++;
 
-	if (PageHugeTemporary(page)) {
+	if (hugetlb_test_page_flag(page, HP_Temporary)) {
 		list_del(&page->lru);
-		ClearPageHugeTemporary(page);
+		hugetlb_clear_page_flag(page, HP_Temporary);
 		update_and_free_page(h, page);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
@@ -1863,7 +1841,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	 * codeflow
 	 */
 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
-		SetPageHugeTemporary(page);
+		hugetlb_set_page_flag(page, HP_Temporary);
 		spin_unlock(&hugetlb_lock);
 		put_page(page);
 		return NULL;
@@ -1894,7 +1872,7 @@ static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
 	 * We do not account these pages as surplus because they are only
 	 * temporary and will be released properly on the last reference
 	 */
-	SetPageHugeTemporary(page);
+	hugetlb_set_page_flag(page, HP_Temporary);
 
 	return page;
 }
@@ -5608,12 +5586,12 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 	 * here as well otherwise the global surplus count will not match
 	 * the per-node's.
 	 */
-	if (PageHugeTemporary(newpage)) {
+	if (hugetlb_test_page_flag(newpage, HP_Temporary)) {
 		int old_nid = page_to_nid(oldpage);
 		int new_nid = page_to_nid(newpage);
 
-		SetPageHugeTemporary(oldpage);
-		ClearPageHugeTemporary(newpage);
+		hugetlb_set_page_flag(oldpage, HP_Temporary);
+		hugetlb_clear_page_flag(newpage, HP_Temporary);
 
 		spin_lock(&hugetlb_lock);
 		if (h->surplus_huge_pages_node[old_nid]) {
-- 
2.29.2



* [PATCH 5/5] hugetlb: convert PageHugeFreed to HP_Freed flag
  2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
                   ` (3 preceding siblings ...)
  2021-01-16  0:31 ` [PATCH 4/5] hugetlb: convert PageHugeTemporary() to HP_Temporary flag Mike Kravetz
@ 2021-01-16  0:31 ` Mike Kravetz
  4 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16  0:31 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Michal Hocko, Naoya Horiguchi, Muchun Song, David Hildenbrand,
	Oscar Salvador, Matthew Wilcox, Andrew Morton, Mike Kravetz

Use the new hugetlb specific flag HP_Freed to replace the
PageHugeFreed interfaces.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |  2 ++
 mm/hugetlb.c            | 23 ++++-------------------
 2 files changed, 6 insertions(+), 19 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 166825c85875..5c99969fbbd6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -487,11 +487,13 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
  *	allocator.  Typically used for migration target pages when no pages
  *	are available in the pool.  The hugetlb free page path will
  *	immediately free pages with this flag set to the buddy allocator.
+ * HP_Freed - Set when page is on the free lists.
  */
 enum hugetlb_page_flags {
 	HP_Restore_Reserve = 0,
 	HP_Migratable,
 	HP_Temporary,
+	HP_Freed,
 };
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 53e9168a97bd..073137b32657 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -79,21 +79,6 @@ DEFINE_SPINLOCK(hugetlb_lock);
 static int num_fault_mutexes;
 struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
-static inline bool PageHugeFreed(struct page *head)
-{
-	return page_private(head + 4) == -1UL;
-}
-
-static inline void SetPageHugeFreed(struct page *head)
-{
-	set_page_private(head + 4, -1UL);
-}
-
-static inline void ClearPageHugeFreed(struct page *head)
-{
-	set_page_private(head + 4, 0);
-}
-
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
 
@@ -1043,7 +1028,7 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	list_move(&page->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
-	SetPageHugeFreed(page);
+	hugetlb_set_page_flag(page, HP_Freed);
 }
 
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
@@ -1060,7 +1045,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 
 		list_move(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
-		ClearPageHugeFreed(page);
+		hugetlb_clear_page_flag(page, HP_Freed);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		return page;
@@ -1474,7 +1459,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
-	ClearPageHugeFreed(page);
+	hugetlb_clear_page_flag(page, HP_Freed);
 	spin_unlock(&hugetlb_lock);
 }
 
@@ -1747,7 +1732,7 @@ int dissolve_free_huge_page(struct page *page)
 		 * We should make sure that the page is already on the free list
 		 * when it is dissolved.
 		 */
-		if (unlikely(!PageHugeFreed(head))) {
+		if (unlikely(!hugetlb_test_page_flag(head, HP_Freed))) {
 			rc = -EAGAIN;
 			goto out;
 		}
-- 
2.29.2



* Re: [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag
  2021-01-16  0:31 ` [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag Mike Kravetz
@ 2021-01-16  4:24   ` Matthew Wilcox
  2021-01-16  4:36     ` [External] " Muchun Song
  2021-01-16 12:06     ` Oscar Salvador
  0 siblings, 2 replies; 10+ messages in thread
From: Matthew Wilcox @ 2021-01-16  4:24 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: linux-kernel, linux-mm, Michal Hocko, Naoya Horiguchi,
	Muchun Song, David Hildenbrand, Oscar Salvador, Andrew Morton

On Fri, Jan 15, 2021 at 04:31:02PM -0800, Mike Kravetz wrote:
> +++ b/fs/hugetlbfs/inode.c
> @@ -735,7 +735,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  
>  		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>  
> -		set_page_huge_active(page);
> +		hugetlb_set_page_flag(page, HP_Migratable);

I had understood the request to be more like ...

		SetHPageMigratable(page);

> +++ b/include/linux/hugetlb.h
> @@ -480,9 +480,13 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>   * HP_Restore_Reserve - Set when a hugetlb page consumes a reservation at
>   *	allocation time.  Cleared when page is fully instantiated.  Free
>   *	routine checks flag to restore a reservation on error paths.
> + * HP_Migratable - Set after a newly allocated page is added to the page
> + *	cache and/or page tables.  Indicates the page is a candidate for
> + *	migration.
>   */
>  enum hugetlb_page_flags {
>  	HP_Restore_Reserve = 0,
> +	HP_Migratable,
>  };

and name these HPG_restore_reserve and HPG_migratable

and generate the calls to hugetlb_set_page_flag etc from macros, eg:

#define TESTHPAGEFLAG(uname, lname)					\
static __always_inline bool HPage##uname(struct page *page)		\
{ return test_bit(HPG_##lname, &page->private); }
...
#define HPAGEFLAG(uname, lname)						\
	TESTHPAGEFLAG(uname, lname)					\
	SETHPAGEFLAG(uname, lname)					\
	CLEARHPAGEFLAG(uname, lname)

HPAGEFLAG(RestoreReserve, restore_reserve)
HPAGEFLAG(Migratable, migratable)

just to mirror page-flags.h more closely.


* Re: [External] Re: [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag
  2021-01-16  4:24   ` Matthew Wilcox
@ 2021-01-16  4:36     ` Muchun Song
  2021-01-16 12:06     ` Oscar Salvador
  1 sibling, 0 replies; 10+ messages in thread
From: Muchun Song @ 2021-01-16  4:36 UTC (permalink / raw)
  To: Matthew Wilcox, Mike Kravetz
  Cc: LKML, Linux Memory Management List, Michal Hocko,
	Naoya Horiguchi, David Hildenbrand, Oscar Salvador,
	Andrew Morton

On Sat, Jan 16, 2021 at 12:26 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Jan 15, 2021 at 04:31:02PM -0800, Mike Kravetz wrote:
> > +++ b/fs/hugetlbfs/inode.c
> > @@ -735,7 +735,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> >
> >               mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> >
> > -             set_page_huge_active(page);
> > +             hugetlb_set_page_flag(page, HP_Migratable);
>
> I had understood the request to be more like ...
>
>                 SetHPageMigratable(page);
>
> > +++ b/include/linux/hugetlb.h
> > @@ -480,9 +480,13 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
> >   * HP_Restore_Reserve - Set when a hugetlb page consumes a reservation at
> >   *   allocation time.  Cleared when page is fully instantiated.  Free
> >   *   routine checks flag to restore a reservation on error paths.
> > + * HP_Migratable - Set after a newly allocated page is added to the page
> > + *   cache and/or page tables.  Indicates the page is a candidate for
> > + *   migration.
> >   */
> >  enum hugetlb_page_flags {
> >       HP_Restore_Reserve = 0,
> > +     HP_Migratable,
> >  };
>
> and name these HPG_restore_reserve and HPG_migratable
>
> and generate the calls to hugetlb_set_page_flag etc from macros, eg:
>
> #define TESTHPAGEFLAG(uname, lname)                                     \
> static __always_inline bool HPage##uname(struct page *page)             \
> { return test_bit(HPG_##lname, &page->private); }
> ...
> #define HPAGEFLAG(uname, lname)                                         \
>         TESTHPAGEFLAG(uname, lname)                                     \
>         SETHPAGEFLAG(uname, lname)                                      \
>         CLEARHPAGEFLAG(uname, lname)
>
> HPAGEFLAG(RestoreReserve, restore_reserve)
> HPAGEFLAG(Migratable, migratable)
>
> just to mirror page-flags.h more closely.

I prefer this suggestion. I also made the same suggestion in the
previous RFC version.


* Re: [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag
  2021-01-16  4:24   ` Matthew Wilcox
  2021-01-16  4:36     ` [External] " Muchun Song
@ 2021-01-16 12:06     ` Oscar Salvador
  2021-01-16 21:53       ` Mike Kravetz
  1 sibling, 1 reply; 10+ messages in thread
From: Oscar Salvador @ 2021-01-16 12:06 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Mike Kravetz, linux-kernel, linux-mm, Michal Hocko,
	Naoya Horiguchi, Muchun Song, David Hildenbrand, Andrew Morton

On Sat, Jan 16, 2021 at 04:24:16AM +0000, Matthew Wilcox wrote:
> and name these HPG_restore_reserve and HPG_migratable
> 
> and generate the calls to hugetlb_set_page_flag etc from macros, eg:
> 
> #define TESTHPAGEFLAG(uname, lname)					\
> static __always_inline bool HPage##uname(struct page *page)		\
> { return test_bit(HPG_##lname, &page->private); }
> ...
> #define HPAGEFLAG(uname, lname)						\
> 	TESTHPAGEFLAG(uname, lname)					\
> 	SETHPAGEFLAG(uname, lname)					\
> 	CLEARHPAGEFLAG(uname, lname)
> 
> HPAGEFLAG(RestoreReserve, restore_reserve)
> HPAGEFLAG(Migratable, migratable)
> 
> just to mirror page-flags.h more closely.

That is on me.
I thought that given the low number of flags, we could get away with:

hugetlb_{set,test,clear}_page_flag(page, flag)

and call it from the code.
But some of the flags need to be set/tested outside hugetlb code, so
it indeed looks nicer and more consistent to follow page-flags.h convention.

Sorry for the noise.

-- 
Oscar Salvador
SUSE L3


* Re: [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag
  2021-01-16 12:06     ` Oscar Salvador
@ 2021-01-16 21:53       ` Mike Kravetz
  0 siblings, 0 replies; 10+ messages in thread
From: Mike Kravetz @ 2021-01-16 21:53 UTC (permalink / raw)
  To: Oscar Salvador, Matthew Wilcox
  Cc: linux-kernel, linux-mm, Michal Hocko, Naoya Horiguchi,
	Muchun Song, David Hildenbrand, Andrew Morton

On 1/16/21 4:06 AM, Oscar Salvador wrote:
> On Sat, Jan 16, 2021 at 04:24:16AM +0000, Matthew Wilcox wrote:
>> and name these HPG_restore_reserve and HPG_migratable
>>
>> and generate the calls to hugetlb_set_page_flag etc from macros, eg:
>>
>> #define TESTHPAGEFLAG(uname, lname)					\
>> static __always_inline bool HPage##uname(struct page *page)		\
>> { return test_bit(HPG_##lname, &page->private); }
>> ...
>> #define HPAGEFLAG(uname, lname)						\
>> 	TESTHPAGEFLAG(uname, lname)					\
>> 	SETHPAGEFLAG(uname, lname)					\
>> 	CLEARHPAGEFLAG(uname, lname)
>>
>> HPAGEFLAG(RestoreReserve, restore_reserve)
>> HPAGEFLAG(Migratable, migratable)
>>
>> just to mirror page-flags.h more closely.
> 
> That is on me.
> I thought that given the low number of flags, we coud get away with:
> 
> hugetlb_{set,test,clear}_page_flag(page, flag)
> 
> and call it from the code.
> But some of the flags need to be set/tested outside hugetlb code, so
> it indeed looks nicer and more consistent to follow page-flags.h convention.
> 
> Sorry for the noise.

Thanks everyone!

I was unsure about the best way to go for this.  Will send out a new version
in a few days using the page-flag style macros.

-- 
Mike Kravetz


end of thread, other threads:[~2021-01-16 21:55 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-16  0:31 [PATCH 0/5] create hugetlb flags to consolidate state Mike Kravetz
2021-01-16  0:31 ` [PATCH 1/5] hugetlb: use page.private for hugetlb specific page flags Mike Kravetz
2021-01-16  0:31 ` [PATCH 2/5] hugetlb: convert page_huge_active() to HP_Migratable flag Mike Kravetz
2021-01-16  4:24   ` Matthew Wilcox
2021-01-16  4:36     ` [External] " Muchun Song
2021-01-16 12:06     ` Oscar Salvador
2021-01-16 21:53       ` Mike Kravetz
2021-01-16  0:31 ` [PATCH 3/5] hugetlb: only set HP_Migratable for migratable hstates Mike Kravetz
2021-01-16  0:31 ` [PATCH 4/5] hugetlb: convert PageHugeTemporary() to HP_Temporary flag Mike Kravetz
2021-01-16  0:31 ` [PATCH 5/5] hugetlb: convert PageHugeFreed to HP_Freed flag Mike Kravetz
