linux-mm.kvack.org archive mirror
* [PATCH 0/2] migrate: convert migrate_pages()/unmap_and_move() to use folios
@ 2022-11-04  8:30 Huang Ying
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
  2022-11-04  8:30 ` [PATCH 2/2] migrate: convert migrate_pages() " Huang Ying
  0 siblings, 2 replies; 9+ messages in thread
From: Huang Ying @ 2022-11-04  8:30 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Huang Ying, Zi Yan, Yang Shi,
	Baolin Wang, Oscar Salvador, Matthew Wilcox

The conversion is quite straightforward: just replace the page API calls
with the corresponding folio API calls.  migrate_pages() and
unmap_and_move() mostly work with folios (head pages) only.
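
For readers less familiar with the folio conversion work, the flavor of
the change is a mechanical one-to-one replacement of page helpers with
their folio equivalents.  A condensed sketch, adapted from patch 1 below
(illustrative only, not itself a hunk):

	/* page-based (before) */
	if (page_count(page) == 1) {
		ClearPageActive(page);
		ClearPageUnevictable(page);
	}

	/* folio-based (after) */
	if (folio_ref_count(src) == 1) {
		folio_clear_active(src);
		folio_clear_unevictable(src);
	}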

Changelog:

rfc -> v1:

- Change the API used to test for transhuge and hugetlb, per Matthew's and Zi's comments.

- Account THP-related statistics for THP only, per Matthew's comments.

Best Regards,
Huang, Ying



* [PATCH 1/2] migrate: convert unmap_and_move() to use folios
  2022-11-04  8:30 [PATCH 0/2] migrate: convert migrate_pages()/unmap_and_move() to use folios Huang Ying
@ 2022-11-04  8:30 ` Huang Ying
  2022-11-07  7:26   ` Baolin Wang
                     ` (3 more replies)
  2022-11-04  8:30 ` [PATCH 2/2] migrate: convert migrate_pages() " Huang Ying
  1 sibling, 4 replies; 9+ messages in thread
From: Huang Ying @ 2022-11-04  8:30 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Huang Ying, Zi Yan, Yang Shi,
	Baolin Wang, Oscar Salvador, Matthew Wilcox

Quite straightforward: the page functions are converted to the
corresponding folio functions, and the comments are updated accordingly.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
---
 mm/migrate.c | 54 ++++++++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index dff333593a8a..f6dd749dd2f8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1150,79 +1150,79 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
 }
 
 /*
- * Obtain the lock on page, remove all ptes and migrate the page
- * to the newly allocated page in newpage.
+ * Obtain the lock on folio, remove all ptes and migrate the folio
+ * to the newly allocated folio in dst.
  */
 static int unmap_and_move(new_page_t get_new_page,
 				   free_page_t put_new_page,
-				   unsigned long private, struct page *page,
+				   unsigned long private, struct folio *src,
 				   int force, enum migrate_mode mode,
 				   enum migrate_reason reason,
 				   struct list_head *ret)
 {
-	struct folio *dst, *src = page_folio(page);
+	struct folio *dst;
 	int rc = MIGRATEPAGE_SUCCESS;
 	struct page *newpage = NULL;
 
-	if (!thp_migration_supported() && PageTransHuge(page))
+	if (!thp_migration_supported() && folio_test_transhuge(src))
 		return -ENOSYS;
 
-	if (page_count(page) == 1) {
-		/* Page was freed from under us. So we are done. */
-		ClearPageActive(page);
-		ClearPageUnevictable(page);
+	if (folio_ref_count(src) == 1) {
+		/* Folio was freed from under us. So we are done. */
+		folio_clear_active(src);
+		folio_clear_unevictable(src);
 		/* free_pages_prepare() will clear PG_isolated. */
 		goto out;
 	}
 
-	newpage = get_new_page(page, private);
+	newpage = get_new_page(&src->page, private);
 	if (!newpage)
 		return -ENOMEM;
 	dst = page_folio(newpage);
 
-	newpage->private = 0;
+	dst->private = 0;
 	rc = __unmap_and_move(src, dst, force, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
-		set_page_owner_migrate_reason(newpage, reason);
+		set_page_owner_migrate_reason(&dst->page, reason);
 
 out:
 	if (rc != -EAGAIN) {
 		/*
-		 * A page that has been migrated has all references
-		 * removed and will be freed. A page that has not been
+		 * A folio that has been migrated has all references
+		 * removed and will be freed. A folio that has not been
 		 * migrated will have kept its references and be restored.
 		 */
-		list_del(&page->lru);
+		list_del(&src->lru);
 	}
 
 	/*
 	 * If migration is successful, releases reference grabbed during
-	 * isolation. Otherwise, restore the page to right list unless
+	 * isolation. Otherwise, restore the folio to right list unless
 	 * we want to retry.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
 		/*
-		 * Compaction can migrate also non-LRU pages which are
+		 * Compaction can migrate also non-LRU folios which are
 		 * not accounted to NR_ISOLATED_*. They can be recognized
-		 * as __PageMovable
+		 * as __folio_test_movable
 		 */
-		if (likely(!__PageMovable(page)))
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -thp_nr_pages(page));
+		if (likely(!__folio_test_movable(src)))
+			mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
+					folio_is_file_lru(src), -folio_nr_pages(src));
 
 		if (reason != MR_MEMORY_FAILURE)
 			/*
-			 * We release the page in page_handle_poison.
+			 * We release the folio in page_handle_poison.
 			 */
-			put_page(page);
+			folio_put(src);
 	} else {
 		if (rc != -EAGAIN)
-			list_add_tail(&page->lru, ret);
+			list_add_tail(&src->lru, ret);
 
 		if (put_new_page)
-			put_new_page(newpage, private);
+			put_new_page(&dst->page, private);
 		else
-			put_page(newpage);
+			folio_put(dst);
 	}
 
 	return rc;
@@ -1459,7 +1459,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 						&ret_pages);
 			else
 				rc = unmap_and_move(get_new_page, put_new_page,
-						private, page, pass > 2, mode,
+						private, page_folio(page), pass > 2, mode,
 						reason, &ret_pages);
 			/*
 			 * The rules are:
-- 
2.35.1




* [PATCH 2/2] migrate: convert migrate_pages() to use folios
  2022-11-04  8:30 [PATCH 0/2] migrate: convert migrate_pages()/unmap_and_move() to use folios Huang Ying
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
@ 2022-11-04  8:30 ` Huang Ying
  2022-11-07  8:13   ` Baolin Wang
  1 sibling, 1 reply; 9+ messages in thread
From: Huang Ying @ 2022-11-04  8:30 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Huang Ying, Zi Yan, Yang Shi,
	Baolin Wang, Oscar Salvador, Matthew Wilcox

Quite straightforward: the page functions are converted to the
corresponding folio functions, and the comments are updated accordingly.

THP-specific code is converted to work on large folios.
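
The central pattern, visible in the hunks below, is that the migration
loop now tracks any large folio, while the THP counters are only bumped
when the large folio is also PMD-mappable.  A condensed excerpt (all
identifiers are taken from the patch itself):

	is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
	is_thp = is_large && folio_test_pmd_mappable(folio);
	nr_pages = folio_nr_pages(folio);
	...
	if (is_large) {
		nr_large_failed++;
		nr_thp_failed += is_thp;	/* counts PMD-sized folios only */
	} else if (!no_split_folio_counting) {
		nr_failed++;
	}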

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
---
 mm/migrate.c | 201 +++++++++++++++++++++++++++------------------------
 1 file changed, 107 insertions(+), 94 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index f6dd749dd2f8..b41289ef3b65 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1373,218 +1373,231 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	return rc;
 }
 
-static inline int try_split_thp(struct page *page, struct list_head *split_pages)
+static inline int try_split_folio(struct folio *folio, struct list_head *split_folios)
 {
 	int rc;
 
-	lock_page(page);
-	rc = split_huge_page_to_list(page, split_pages);
-	unlock_page(page);
+	folio_lock(folio);
+	rc = split_folio_to_list(folio, split_folios);
+	folio_unlock(folio);
 	if (!rc)
-		list_move_tail(&page->lru, split_pages);
+		list_move_tail(&folio->lru, split_folios);
 
 	return rc;
 }
 
 /*
- * migrate_pages - migrate the pages specified in a list, to the free pages
+ * migrate_pages - migrate the folios specified in a list, to the free folios
  *		   supplied as the target for the page migration
  *
- * @from:		The list of pages to be migrated.
- * @get_new_page:	The function used to allocate free pages to be used
- *			as the target of the page migration.
- * @put_new_page:	The function used to free target pages if migration
+ * @from:		The list of folios to be migrated.
+ * @get_new_page:	The function used to allocate free folios to be used
+ *			as the target of the folio migration.
+ * @put_new_page:	The function used to free target folios if migration
  *			fails, or NULL if no special handling is necessary.
  * @private:		Private data to be passed on to get_new_page()
  * @mode:		The migration mode that specifies the constraints for
- *			page migration, if any.
- * @reason:		The reason for page migration.
- * @ret_succeeded:	Set to the number of normal pages migrated successfully if
+ *			folio migration, if any.
+ * @reason:		The reason for folio migration.
+ * @ret_succeeded:	Set to the number of folios migrated successfully if
  *			the caller passes a non-NULL pointer.
  *
- * The function returns after 10 attempts or if no pages are movable any more
- * because the list has become empty or no retryable pages exist any more.
- * It is caller's responsibility to call putback_movable_pages() to return pages
+ * The function returns after 10 attempts or if no folios are movable any more
+ * because the list has become empty or no retryable folios exist any more.
+ * It is caller's responsibility to call putback_movable_pages() to return folios
  * to the LRU or free list only if ret != 0.
  *
- * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
- * an error code. The number of THP splits will be considered as the number of
- * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
+ * Returns the number of {normal folio, large folio, hugetlb} that were not
+ * migrated, or an error code. The number of large folio splits will be
+ * considered as the number of non-migrated large folio, no matter how many
+ * split folios of the large folio are migrated successfully.
  */
 int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
 {
 	int retry = 1;
+	int large_retry = 1;
 	int thp_retry = 1;
 	int nr_failed = 0;
 	int nr_failed_pages = 0;
 	int nr_retry_pages = 0;
 	int nr_succeeded = 0;
 	int nr_thp_succeeded = 0;
+	int nr_large_failed = 0;
 	int nr_thp_failed = 0;
 	int nr_thp_split = 0;
 	int pass = 0;
+	bool is_large = false;
 	bool is_thp = false;
-	struct page *page;
-	struct page *page2;
-	int rc, nr_subpages;
-	LIST_HEAD(ret_pages);
-	LIST_HEAD(thp_split_pages);
+	struct folio *folio, *folio2;
+	int rc, nr_pages;
+	LIST_HEAD(ret_folios);
+	LIST_HEAD(split_folios);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
-	bool no_subpage_counting = false;
+	bool no_split_folio_counting = false;
 
 	trace_mm_migrate_pages_start(mode, reason);
 
-thp_subpage_migration:
-	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
+split_folio_migration:
+	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
 		retry = 0;
+		large_retry = 0;
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		list_for_each_entry_safe(page, page2, from, lru) {
+		list_for_each_entry_safe(folio, folio2, from, lru) {
 			/*
-			 * THP statistics is based on the source huge page.
-			 * Capture required information that might get lost
-			 * during migration.
+			 * large folio statistics is based on the source large
+			 * folio. Capture required information that might get
+			 * lost during migration.
 			 */
-			is_thp = PageTransHuge(page) && !PageHuge(page);
-			nr_subpages = compound_nr(page);
+			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
+			is_thp = is_large && folio_test_pmd_mappable(folio);
+			nr_pages = folio_nr_pages(folio);
 			cond_resched();
 
-			if (PageHuge(page))
+			if (folio_test_hugetlb(folio))
 				rc = unmap_and_move_huge_page(get_new_page,
-						put_new_page, private, page,
-						pass > 2, mode, reason,
-						&ret_pages);
+						put_new_page, private,
+						&folio->page, pass > 2, mode,
+						reason,
+						&ret_folios);
 			else
 				rc = unmap_and_move(get_new_page, put_new_page,
-						private, page_folio(page), pass > 2, mode,
-						reason, &ret_pages);
+						private, folio, pass > 2, mode,
+						reason, &ret_folios);
 			/*
 			 * The rules are:
-			 *	Success: non hugetlb page will be freed, hugetlb
-			 *		 page will be put back
+			 *	Success: non hugetlb folio will be freed, hugetlb
+			 *		 folio will be put back
 			 *	-EAGAIN: stay on the from list
 			 *	-ENOMEM: stay on the from list
 			 *	-ENOSYS: stay on the from list
-			 *	Other errno: put on ret_pages list then splice to
+			 *	Other errno: put on ret_folios list then splice to
 			 *		     from list
 			 */
 			switch(rc) {
 			/*
-			 * THP migration might be unsupported or the
-			 * allocation could've failed so we should
-			 * retry on the same page with the THP split
-			 * to base pages.
+			 * Large folio migration might be unsupported or
+			 * the allocation could've failed so we should retry
+			 * on the same folio with the large folio split
+			 * to normal folios.
 			 *
-			 * Sub-pages are put in thp_split_pages, and
+			 * Split folios are put in split_folios, and
 			 * we will migrate them after the rest of the
 			 * list is processed.
 			 */
 			case -ENOSYS:
-				/* THP migration is unsupported */
-				if (is_thp) {
-					nr_thp_failed++;
-					if (!try_split_thp(page, &thp_split_pages)) {
-						nr_thp_split++;
+				/* Large folio migration is unsupported */
+				if (is_large) {
+					nr_large_failed++;
+					nr_thp_failed += is_thp;
+					if (!try_split_folio(folio, &split_folios)) {
+						nr_thp_split += is_thp;
 						break;
 					}
 				/* Hugetlb migration is unsupported */
-				} else if (!no_subpage_counting) {
+				} else if (!no_split_folio_counting) {
 					nr_failed++;
 				}
 
-				nr_failed_pages += nr_subpages;
-				list_move_tail(&page->lru, &ret_pages);
+				nr_failed_pages += nr_pages;
+				list_move_tail(&folio->lru, &ret_folios);
 				break;
 			case -ENOMEM:
 				/*
 				 * When memory is low, don't bother to try to migrate
-				 * other pages, just exit.
+				 * other folios, just exit.
 				 */
-				if (is_thp) {
-					nr_thp_failed++;
-					/* THP NUMA faulting doesn't split THP to retry. */
-					if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
-						nr_thp_split++;
+				if (is_large) {
+					nr_large_failed++;
+					nr_thp_failed += is_thp;
+					/* Large folio NUMA faulting doesn't split to retry. */
+					if (!nosplit && !try_split_folio(folio, &split_folios)) {
+						nr_thp_split += is_thp;
 						break;
 					}
-				} else if (!no_subpage_counting) {
+				} else if (!no_split_folio_counting) {
 					nr_failed++;
 				}
 
-				nr_failed_pages += nr_subpages + nr_retry_pages;
+				nr_failed_pages += nr_pages + nr_retry_pages;
 				/*
-				 * There might be some subpages of fail-to-migrate THPs
-				 * left in thp_split_pages list. Move them back to migration
+				 * There might be some split folios of fail-to-migrate large
+				 * folios left in split_folios list. Move them back to migration
 				 * list so that they could be put back to the right list by
-				 * the caller otherwise the page refcnt will be leaked.
+				 * the caller otherwise the folio refcnt will be leaked.
 				 */
-				list_splice_init(&thp_split_pages, from);
+				list_splice_init(&split_folios, from);
 				/* nr_failed isn't updated for not used */
+				nr_large_failed += large_retry;
 				nr_thp_failed += thp_retry;
 				goto out;
 			case -EAGAIN:
-				if (is_thp)
-					thp_retry++;
-				else if (!no_subpage_counting)
+				if (is_large) {
+					large_retry++;
+					thp_retry += is_thp;
+				} else if (!no_split_folio_counting) {
 					retry++;
-				nr_retry_pages += nr_subpages;
+				}
+				nr_retry_pages += nr_pages;
 				break;
 			case MIGRATEPAGE_SUCCESS:
-				nr_succeeded += nr_subpages;
-				if (is_thp)
-					nr_thp_succeeded++;
+				nr_succeeded += nr_pages;
+				nr_thp_succeeded += is_thp;
 				break;
 			default:
 				/*
 				 * Permanent failure (-EBUSY, etc.):
-				 * unlike -EAGAIN case, the failed page is
-				 * removed from migration page list and not
+				 * unlike -EAGAIN case, the failed folio is
+				 * removed from migration folio list and not
 				 * retried in the next outer loop.
 				 */
-				if (is_thp)
-					nr_thp_failed++;
-				else if (!no_subpage_counting)
+				if (is_large) {
+					nr_large_failed++;
+					nr_thp_failed += is_thp;
+				} else if (!no_split_folio_counting) {
 					nr_failed++;
+				}
 
-				nr_failed_pages += nr_subpages;
+				nr_failed_pages += nr_pages;
 				break;
 			}
 		}
 	}
 	nr_failed += retry;
+	nr_large_failed += large_retry;
 	nr_thp_failed += thp_retry;
 	nr_failed_pages += nr_retry_pages;
 	/*
-	 * Try to migrate subpages of fail-to-migrate THPs, no nr_failed
-	 * counting in this round, since all subpages of a THP is counted
-	 * as 1 failure in the first round.
+	 * Try to migrate split folios of fail-to-migrate large folios, no
+	 * nr_failed counting in this round, since all split folios of a
+	 * large folio is counted as 1 failure in the first round.
 	 */
-	if (!list_empty(&thp_split_pages)) {
+	if (!list_empty(&split_folios)) {
 		/*
-		 * Move non-migrated pages (after 10 retries) to ret_pages
+		 * Move non-migrated folios (after 10 retries) to ret_folios
 		 * to avoid migrating them again.
 		 */
-		list_splice_init(from, &ret_pages);
-		list_splice_init(&thp_split_pages, from);
-		no_subpage_counting = true;
+		list_splice_init(from, &ret_folios);
+		list_splice_init(&split_folios, from);
+		no_split_folio_counting = true;
 		retry = 1;
-		goto thp_subpage_migration;
+		goto split_folio_migration;
 	}
 
-	rc = nr_failed + nr_thp_failed;
+	rc = nr_failed + nr_large_failed;
 out:
 	/*
-	 * Put the permanent failure page back to migration list, they
+	 * Put the permanent failure folio back to migration list, they
 	 * will be put back to the right list by the caller.
 	 */
-	list_splice(&ret_pages, from);
+	list_splice(&ret_folios, from);
 
 	/*
-	 * Return 0 in case all subpages of fail-to-migrate THPs are
-	 * migrated successfully.
+	 * Return 0 in case all split folios of fail-to-migrate large folios
+	 * are migrated successfully.
 	 */
 	if (list_empty(from))
 		rc = 0;
-- 
2.35.1




* Re: [PATCH 1/2] migrate: convert unmap_and_move() to use folios
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
@ 2022-11-07  7:26   ` Baolin Wang
  2022-11-07 13:49   ` Matthew Wilcox
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Baolin Wang @ 2022-11-07  7:26 UTC (permalink / raw)
  To: Huang Ying, linux-mm
  Cc: linux-kernel, Andrew Morton, Zi Yan, Yang Shi, Oscar Salvador,
	Matthew Wilcox



On 11/4/2022 4:30 PM, Huang Ying wrote:
> Quite straightforward, the page functions are converted to
> corresponding folio functions.  Same for comments.
>

LGTM. Please feel free to add:
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> ---
>   mm/migrate.c | 54 ++++++++++++++++++++++++++--------------------------
>   1 file changed, 27 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dff333593a8a..f6dd749dd2f8 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1150,79 +1150,79 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>   }
>   
>   /*
> - * Obtain the lock on page, remove all ptes and migrate the page
> - * to the newly allocated page in newpage.
> + * Obtain the lock on folio, remove all ptes and migrate the folio
> + * to the newly allocated folio in dst.
>    */
>   static int unmap_and_move(new_page_t get_new_page,
>   				   free_page_t put_new_page,
> -				   unsigned long private, struct page *page,
> +				   unsigned long private, struct folio *src,
>   				   int force, enum migrate_mode mode,
>   				   enum migrate_reason reason,
>   				   struct list_head *ret)
>   {
> -	struct folio *dst, *src = page_folio(page);
> +	struct folio *dst;
>   	int rc = MIGRATEPAGE_SUCCESS;
>   	struct page *newpage = NULL;
>   
> -	if (!thp_migration_supported() && PageTransHuge(page))
> +	if (!thp_migration_supported() && folio_test_transhuge(src))
>   		return -ENOSYS;
>   
> -	if (page_count(page) == 1) {
> -		/* Page was freed from under us. So we are done. */
> -		ClearPageActive(page);
> -		ClearPageUnevictable(page);
> +	if (folio_ref_count(src) == 1) {
> +		/* Folio was freed from under us. So we are done. */
> +		folio_clear_active(src);
> +		folio_clear_unevictable(src);
>   		/* free_pages_prepare() will clear PG_isolated. */
>   		goto out;
>   	}
>   
> -	newpage = get_new_page(page, private);
> +	newpage = get_new_page(&src->page, private);
>   	if (!newpage)
>   		return -ENOMEM;
>   	dst = page_folio(newpage);
>   
> -	newpage->private = 0;
> +	dst->private = 0;
>   	rc = __unmap_and_move(src, dst, force, mode);
>   	if (rc == MIGRATEPAGE_SUCCESS)
> -		set_page_owner_migrate_reason(newpage, reason);
> +		set_page_owner_migrate_reason(&dst->page, reason);
>   
>   out:
>   	if (rc != -EAGAIN) {
>   		/*
> -		 * A page that has been migrated has all references
> -		 * removed and will be freed. A page that has not been
> +		 * A folio that has been migrated has all references
> +		 * removed and will be freed. A folio that has not been
>   		 * migrated will have kept its references and be restored.
>   		 */
> -		list_del(&page->lru);
> +		list_del(&src->lru);
>   	}
>   
>   	/*
>   	 * If migration is successful, releases reference grabbed during
> -	 * isolation. Otherwise, restore the page to right list unless
> +	 * isolation. Otherwise, restore the folio to right list unless
>   	 * we want to retry.
>   	 */
>   	if (rc == MIGRATEPAGE_SUCCESS) {
>   		/*
> -		 * Compaction can migrate also non-LRU pages which are
> +		 * Compaction can migrate also non-LRU folios which are
>   		 * not accounted to NR_ISOLATED_*. They can be recognized
> -		 * as __PageMovable
> +		 * as __folio_test_movable
>   		 */
> -		if (likely(!__PageMovable(page)))
> -			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
> -					page_is_file_lru(page), -thp_nr_pages(page));
> +		if (likely(!__folio_test_movable(src)))
> +			mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> +					folio_is_file_lru(src), -folio_nr_pages(src));
>   
>   		if (reason != MR_MEMORY_FAILURE)
>   			/*
> -			 * We release the page in page_handle_poison.
> +			 * We release the folio in page_handle_poison.
>   			 */
> -			put_page(page);
> +			folio_put(src);
>   	} else {
>   		if (rc != -EAGAIN)
> -			list_add_tail(&page->lru, ret);
> +			list_add_tail(&src->lru, ret);
>   
>   		if (put_new_page)
> -			put_new_page(newpage, private);
> +			put_new_page(&dst->page, private);
>   		else
> -			put_page(newpage);
> +			folio_put(dst);
>   	}
>   
>   	return rc;
> @@ -1459,7 +1459,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>   						&ret_pages);
>   			else
>   				rc = unmap_and_move(get_new_page, put_new_page,
> -						private, page, pass > 2, mode,
> +						private, page_folio(page), pass > 2, mode,
>   						reason, &ret_pages);
>   			/*
>   			 * The rules are:



* Re: [PATCH 2/2] migrate: convert migrate_pages() to use folios
  2022-11-04  8:30 ` [PATCH 2/2] migrate: convert migrate_pages() " Huang Ying
@ 2022-11-07  8:13   ` Baolin Wang
  2022-11-08  8:24     ` Huang, Ying
  0 siblings, 1 reply; 9+ messages in thread
From: Baolin Wang @ 2022-11-07  8:13 UTC (permalink / raw)
  To: Huang Ying, linux-mm
  Cc: linux-kernel, Andrew Morton, Zi Yan, Yang Shi, Oscar Salvador,
	Matthew Wilcox



On 11/4/2022 4:30 PM, Huang Ying wrote:
> Quite straightforward, the page functions are converted to
> corresponding folio functions.  Same for comments.
> 
> THP specific code are converted to be large folio.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> ---
>   mm/migrate.c | 201 +++++++++++++++++++++++++++------------------------
>   1 file changed, 107 insertions(+), 94 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index f6dd749dd2f8..b41289ef3b65 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1373,218 +1373,231 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>   	return rc;
>   }
>   
> -static inline int try_split_thp(struct page *page, struct list_head *split_pages)
> +static inline int try_split_folio(struct folio *folio, struct list_head *split_folios)
>   {
>   	int rc;
>   
> -	lock_page(page);
> -	rc = split_huge_page_to_list(page, split_pages);
> -	unlock_page(page);
> +	folio_lock(folio);
> +	rc = split_folio_to_list(folio, split_folios);
> +	folio_unlock(folio);
>   	if (!rc)
> -		list_move_tail(&page->lru, split_pages);
> +		list_move_tail(&folio->lru, split_folios);
>   
>   	return rc;
>   }
>   
>   /*
> - * migrate_pages - migrate the pages specified in a list, to the free pages
> + * migrate_pages - migrate the folios specified in a list, to the free folios
>    *		   supplied as the target for the page migration
>    *
> - * @from:		The list of pages to be migrated.
> - * @get_new_page:	The function used to allocate free pages to be used
> - *			as the target of the page migration.
> - * @put_new_page:	The function used to free target pages if migration
> + * @from:		The list of folios to be migrated.
> + * @get_new_page:	The function used to allocate free folios to be used
> + *			as the target of the folio migration.
> + * @put_new_page:	The function used to free target folios if migration
>    *			fails, or NULL if no special handling is necessary.
>    * @private:		Private data to be passed on to get_new_page()
>    * @mode:		The migration mode that specifies the constraints for
> - *			page migration, if any.
> - * @reason:		The reason for page migration.
> - * @ret_succeeded:	Set to the number of normal pages migrated successfully if
> + *			folio migration, if any.
> + * @reason:		The reason for folio migration.
> + * @ret_succeeded:	Set to the number of folios migrated successfully if
>    *			the caller passes a non-NULL pointer.
>    *
> - * The function returns after 10 attempts or if no pages are movable any more
> - * because the list has become empty or no retryable pages exist any more.
> - * It is caller's responsibility to call putback_movable_pages() to return pages
> + * The function returns after 10 attempts or if no folios are movable any more
> + * because the list has become empty or no retryable folios exist any more.
> + * It is caller's responsibility to call putback_movable_pages() to return folios
>    * to the LRU or free list only if ret != 0.
>    *
> - * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
> - * an error code. The number of THP splits will be considered as the number of
> - * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
> + * Returns the number of {normal folio, large folio, hugetlb} that were not
> + * migrated, or an error code. The number of large folio splits will be
> + * considered as the number of non-migrated large folio, no matter how many
> + * split folios of the large folio are migrated successfully.
>    */
>   int migrate_pages(struct list_head *from, new_page_t get_new_page,
>   		free_page_t put_new_page, unsigned long private,
>   		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>   {
>   	int retry = 1;
> +	int large_retry = 1;
>   	int thp_retry = 1;
>   	int nr_failed = 0;
>   	int nr_failed_pages = 0;
>   	int nr_retry_pages = 0;
>   	int nr_succeeded = 0;
>   	int nr_thp_succeeded = 0;
> +	int nr_large_failed = 0;
>   	int nr_thp_failed = 0;
>   	int nr_thp_split = 0;
>   	int pass = 0;
> +	bool is_large = false;
>   	bool is_thp = false;
> -	struct page *page;
> -	struct page *page2;
> -	int rc, nr_subpages;
> -	LIST_HEAD(ret_pages);
> -	LIST_HEAD(thp_split_pages);
> +	struct folio *folio, *folio2;
> +	int rc, nr_pages;
> +	LIST_HEAD(ret_folios);
> +	LIST_HEAD(split_folios);
>   	bool nosplit = (reason == MR_NUMA_MISPLACED);
> -	bool no_subpage_counting = false;
> +	bool no_split_folio_counting = false;
>   
>   	trace_mm_migrate_pages_start(mode, reason);
>   
> -thp_subpage_migration:
> -	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
> +split_folio_migration:
> +	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>   		retry = 0;
> +		large_retry = 0;
>   		thp_retry = 0;
>   		nr_retry_pages = 0;
>   
> -		list_for_each_entry_safe(page, page2, from, lru) {
> +		list_for_each_entry_safe(folio, folio2, from, lru) {
>   			/*
> -			 * THP statistics is based on the source huge page.
> -			 * Capture required information that might get lost
> -			 * during migration.
> +			 * large folio statistics is based on the source large

Nit: s/large/Large

> +			 * folio. Capture required information that might get
> +			 * lost during migration.
>   			 */
> -			is_thp = PageTransHuge(page) && !PageHuge(page);
> -			nr_subpages = compound_nr(page);
> +			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
> +			is_thp = is_large && folio_test_pmd_mappable(folio);
> +			nr_pages = folio_nr_pages(folio);
>   			cond_resched();
>   
> -			if (PageHuge(page))
> +			if (folio_test_hugetlb(folio))
>   				rc = unmap_and_move_huge_page(get_new_page,
> -						put_new_page, private, page,
> -						pass > 2, mode, reason,
> -						&ret_pages);
> +						put_new_page, private,
> +						&folio->page, pass > 2, mode,
> +						reason,
> +						&ret_folios);
>   			else
>   				rc = unmap_and_move(get_new_page, put_new_page,
> -						private, page_folio(page), pass > 2, mode,
> -						reason, &ret_pages);
> +						private, folio, pass > 2, mode,
> +						reason, &ret_folios);
>   			/*
>   			 * The rules are:
> -			 *	Success: non hugetlb page will be freed, hugetlb
> -			 *		 page will be put back
> +			 *	Success: non hugetlb folio will be freed, hugetlb
> +			 *		 folio will be put back
>   			 *	-EAGAIN: stay on the from list
>   			 *	-ENOMEM: stay on the from list
>   			 *	-ENOSYS: stay on the from list
> -			 *	Other errno: put on ret_pages list then splice to
> +			 *	Other errno: put on ret_folios list then splice to
>   			 *		     from list
>   			 */
>   			switch(rc) {
>   			/*
> -			 * THP migration might be unsupported or the
> -			 * allocation could've failed so we should
> -			 * retry on the same page with the THP split
> -			 * to base pages.
> +			 * Large folio migration might be unsupported or
> +			 * the allocation could've failed so we should retry
> +			 * on the same folio with the large folio split
> +			 * to normal folios.
>   			 *
> -			 * Sub-pages are put in thp_split_pages, and
> +			 * Split folios are put in split_folios, and
>   			 * we will migrate them after the rest of the
>   			 * list is processed.
>   			 */
>   			case -ENOSYS:
> -				/* THP migration is unsupported */
> -				if (is_thp) {
> -					nr_thp_failed++;
> -					if (!try_split_thp(page, &thp_split_pages)) {
> -						nr_thp_split++;
> +				/* Large folio migration is unsupported */
> +				if (is_large) {
> +					nr_large_failed++;
> +					nr_thp_failed += is_thp;
> +					if (!try_split_folio(folio, &split_folios)) {
> +						nr_thp_split += is_thp;
>   						break;
>   					}
>   				/* Hugetlb migration is unsupported */
> -				} else if (!no_subpage_counting) {
> +				} else if (!no_split_folio_counting) {
>   					nr_failed++;
>   				}
>   
> -				nr_failed_pages += nr_subpages;
> -				list_move_tail(&page->lru, &ret_pages);
> +				nr_failed_pages += nr_pages;
> +				list_move_tail(&folio->lru, &ret_folios);
>   				break;
>   			case -ENOMEM:
>   				/*
>   				 * When memory is low, don't bother to try to migrate
> -				 * other pages, just exit.
> +				 * other folios, just exit.
>   				 */
> -				if (is_thp) {
> -					nr_thp_failed++;
> -					/* THP NUMA faulting doesn't split THP to retry. */
> -					if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
> -						nr_thp_split++;
> +				if (is_large) {
> +					nr_large_failed++;
> +					nr_thp_failed += is_thp;
> +					/* Large folio NUMA faulting doesn't split to retry. */
> +					if (!nosplit && !try_split_folio(folio, &split_folios)) {

I am not sure whether we need to add an is_thp validation before calling
try_split_folio()?

BTW, you should rebase on the mm-unstable branch, since I've added a 
retry for THP split.

> +						nr_thp_split += is_thp;
>   						break;
>   					}
> -				} else if (!no_subpage_counting) {
> +				} else if (!no_split_folio_counting) {
>   					nr_failed++;
>   				}
>   
> -				nr_failed_pages += nr_subpages + nr_retry_pages;
> +				nr_failed_pages += nr_pages + nr_retry_pages;
>   				/*
> -				 * There might be some subpages of fail-to-migrate THPs
> -				 * left in thp_split_pages list. Move them back to migration
> +				 * There might be some split folios of fail-to-migrate large
> +				 * folios left in split_folios list. Move them back to migration
>   				 * list so that they could be put back to the right list by
> -				 * the caller otherwise the page refcnt will be leaked.
> +				 * the caller otherwise the folio refcnt will be leaked.
>   				 */
> -				list_splice_init(&thp_split_pages, from);
> +				list_splice_init(&split_folios, from);
>   				/* nr_failed isn't updated for not used */
> +				nr_large_failed += large_retry;
>   				nr_thp_failed += thp_retry;
>   				goto out;
>   			case -EAGAIN:
> -				if (is_thp)
> -					thp_retry++;
> -				else if (!no_subpage_counting)
> +				if (is_large) {
> +					large_retry++;
> +					thp_retry += is_thp;
> +				} else if (!no_split_folio_counting) {
>   					retry++;
> -				nr_retry_pages += nr_subpages;
> +				}
> +				nr_retry_pages += nr_pages;
>   				break;
>   			case MIGRATEPAGE_SUCCESS:
> -				nr_succeeded += nr_subpages;
> -				if (is_thp)
> -					nr_thp_succeeded++;
> +				nr_succeeded += nr_pages;
> +				nr_thp_succeeded += is_thp;
>   				break;
>   			default:
>   				/*
>   				 * Permanent failure (-EBUSY, etc.):
> -				 * unlike -EAGAIN case, the failed page is
> -				 * removed from migration page list and not
> +				 * unlike -EAGAIN case, the failed folio is
> +				 * removed from migration folio list and not
>   				 * retried in the next outer loop.
>   				 */
> -				if (is_thp)
> -					nr_thp_failed++;
> -				else if (!no_subpage_counting)
> +				if (is_large) {
> +					nr_large_failed++;
> +					nr_thp_failed += is_thp;
> +				} else if (!no_split_folio_counting) {
>   					nr_failed++;
> +				}
>   
> -				nr_failed_pages += nr_subpages;
> +				nr_failed_pages += nr_pages;
>   				break;
>   			}
>   		}
>   	}
>   	nr_failed += retry;
> +	nr_large_failed += large_retry;
>   	nr_thp_failed += thp_retry;
>   	nr_failed_pages += nr_retry_pages;
>   	/*
> -	 * Try to migrate subpages of fail-to-migrate THPs, no nr_failed
> -	 * counting in this round, since all subpages of a THP is counted
> -	 * as 1 failure in the first round.
> +	 * Try to migrate split folios of fail-to-migrate large folios, no
> +	 * nr_failed counting in this round, since all split folios of a
> +	 * large folio is counted as 1 failure in the first round.
>   	 */
> -	if (!list_empty(&thp_split_pages)) {
> +	if (!list_empty(&split_folios)) {
>   		/*
> -		 * Move non-migrated pages (after 10 retries) to ret_pages
> +		 * Move non-migrated folios (after 10 retries) to ret_folios
>   		 * to avoid migrating them again.
>   		 */
> -		list_splice_init(from, &ret_pages);
> -		list_splice_init(&thp_split_pages, from);
> -		no_subpage_counting = true;
> +		list_splice_init(from, &ret_folios);
> +		list_splice_init(&split_folios, from);
> +		no_split_folio_counting = true;
>   		retry = 1;
> -		goto thp_subpage_migration;
> +		goto split_folio_migration;
>   	}
>   
> -	rc = nr_failed + nr_thp_failed;
> +	rc = nr_failed + nr_large_failed;
>   out:
>   	/*
> -	 * Put the permanent failure page back to migration list, they
> +	 * Put the permanent failure folio back to migration list, they
>   	 * will be put back to the right list by the caller.
>   	 */
> -	list_splice(&ret_pages, from);
> +	list_splice(&ret_folios, from);
>   
>   	/*
> -	 * Return 0 in case all subpages of fail-to-migrate THPs are
> -	 * migrated successfully.
> +	 * Return 0 in case all split folios of fail-to-migrate large folios
> +	 * are migrated successfully.
>   	 */
>   	if (list_empty(from))
>   		rc = 0;



* Re: [PATCH 1/2] migrate: convert unmap_and_move() to use folios
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
  2022-11-07  7:26   ` Baolin Wang
@ 2022-11-07 13:49   ` Matthew Wilcox
  2022-11-07 15:29   ` Zi Yan
  2022-11-07 18:49   ` Yang Shi
  3 siblings, 0 replies; 9+ messages in thread
From: Matthew Wilcox @ 2022-11-07 13:49 UTC (permalink / raw)
  To: Huang Ying
  Cc: linux-mm, linux-kernel, Andrew Morton, Zi Yan, Yang Shi,
	Baolin Wang, Oscar Salvador

On Fri, Nov 04, 2022 at 04:30:19PM +0800, Huang Ying wrote:
> Quite straightforward, the page functions are converted to
> corresponding folio functions.  Same for comments.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH 1/2] migrate: convert unmap_and_move() to use folios
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
  2022-11-07  7:26   ` Baolin Wang
  2022-11-07 13:49   ` Matthew Wilcox
@ 2022-11-07 15:29   ` Zi Yan
  2022-11-07 18:49   ` Yang Shi
  3 siblings, 0 replies; 9+ messages in thread
From: Zi Yan @ 2022-11-07 15:29 UTC (permalink / raw)
  To: Huang Ying
  Cc: linux-mm, linux-kernel, Andrew Morton, Yang Shi, Baolin Wang,
	Oscar Salvador, Matthew Wilcox


On 4 Nov 2022, at 4:30, Huang Ying wrote:

> Quite straightforward, the page functions are converted to
> corresponding folio functions.  Same for comments.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> ---
>  mm/migrate.c | 54 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 27 insertions(+), 27 deletions(-)
>
LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>

--
Best Regards,
Yan, Zi



* Re: [PATCH 1/2] migrate: convert unmap_and_move() to use folios
  2022-11-04  8:30 ` [PATCH 1/2] migrate: convert unmap_and_move() " Huang Ying
                     ` (2 preceding siblings ...)
  2022-11-07 15:29   ` Zi Yan
@ 2022-11-07 18:49   ` Yang Shi
  3 siblings, 0 replies; 9+ messages in thread
From: Yang Shi @ 2022-11-07 18:49 UTC (permalink / raw)
  To: Huang Ying
  Cc: linux-mm, linux-kernel, Andrew Morton, Zi Yan, Baolin Wang,
	Oscar Salvador, Matthew Wilcox

On Fri, Nov 4, 2022 at 1:31 AM Huang Ying <ying.huang@intel.com> wrote:
>
> Quite straightforward, the page functions are converted to
> corresponding folio functions.  Same for comments.

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Matthew Wilcox <willy@infradead.org>
> ---
>  mm/migrate.c | 54 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 27 insertions(+), 27 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dff333593a8a..f6dd749dd2f8 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1150,79 +1150,79 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>  }
>
>  /*
> - * Obtain the lock on page, remove all ptes and migrate the page
> - * to the newly allocated page in newpage.
> + * Obtain the lock on folio, remove all ptes and migrate the folio
> + * to the newly allocated folio in dst.
>   */
>  static int unmap_and_move(new_page_t get_new_page,
>                                    free_page_t put_new_page,
> -                                  unsigned long private, struct page *page,
> +                                  unsigned long private, struct folio *src,
>                                    int force, enum migrate_mode mode,
>                                    enum migrate_reason reason,
>                                    struct list_head *ret)
>  {
> -       struct folio *dst, *src = page_folio(page);
> +       struct folio *dst;
>         int rc = MIGRATEPAGE_SUCCESS;
>         struct page *newpage = NULL;
>
> -       if (!thp_migration_supported() && PageTransHuge(page))
> +       if (!thp_migration_supported() && folio_test_transhuge(src))
>                 return -ENOSYS;
>
> -       if (page_count(page) == 1) {
> -               /* Page was freed from under us. So we are done. */
> -               ClearPageActive(page);
> -               ClearPageUnevictable(page);
> +       if (folio_ref_count(src) == 1) {
> +               /* Folio was freed from under us. So we are done. */
> +               folio_clear_active(src);
> +               folio_clear_unevictable(src);
>                 /* free_pages_prepare() will clear PG_isolated. */
>                 goto out;
>         }
>
> -       newpage = get_new_page(page, private);
> +       newpage = get_new_page(&src->page, private);
>         if (!newpage)
>                 return -ENOMEM;
>         dst = page_folio(newpage);
>
> -       newpage->private = 0;
> +       dst->private = 0;
>         rc = __unmap_and_move(src, dst, force, mode);
>         if (rc == MIGRATEPAGE_SUCCESS)
> -               set_page_owner_migrate_reason(newpage, reason);
> +               set_page_owner_migrate_reason(&dst->page, reason);
>
>  out:
>         if (rc != -EAGAIN) {
>                 /*
> -                * A page that has been migrated has all references
> -                * removed and will be freed. A page that has not been
> +                * A folio that has been migrated has all references
> +                * removed and will be freed. A folio that has not been
>                  * migrated will have kept its references and be restored.
>                  */
> -               list_del(&page->lru);
> +               list_del(&src->lru);
>         }
>
>         /*
>          * If migration is successful, releases reference grabbed during
> -        * isolation. Otherwise, restore the page to right list unless
> +        * isolation. Otherwise, restore the folio to right list unless
>          * we want to retry.
>          */
>         if (rc == MIGRATEPAGE_SUCCESS) {
>                 /*
> -                * Compaction can migrate also non-LRU pages which are
> +                * Compaction can migrate also non-LRU folios which are
>                  * not accounted to NR_ISOLATED_*. They can be recognized
> -                * as __PageMovable
> +                * as __folio_test_movable
>                  */
> -               if (likely(!__PageMovable(page)))
> -                       mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
> -                                       page_is_file_lru(page), -thp_nr_pages(page));
> +               if (likely(!__folio_test_movable(src)))
> +                       mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> +                                       folio_is_file_lru(src), -folio_nr_pages(src));
>
>                 if (reason != MR_MEMORY_FAILURE)
>                         /*
> -                        * We release the page in page_handle_poison.
> +                        * We release the folio in page_handle_poison.
>                          */
> -                       put_page(page);
> +                       folio_put(src);
>         } else {
>                 if (rc != -EAGAIN)
> -                       list_add_tail(&page->lru, ret);
> +                       list_add_tail(&src->lru, ret);
>
>                 if (put_new_page)
> -                       put_new_page(newpage, private);
> +                       put_new_page(&dst->page, private);
>                 else
> -                       put_page(newpage);
> +                       folio_put(dst);
>         }
>
>         return rc;
> @@ -1459,7 +1459,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>                                                 &ret_pages);
>                         else
>                                 rc = unmap_and_move(get_new_page, put_new_page,
> -                                               private, page, pass > 2, mode,
> +                                               private, page_folio(page), pass > 2, mode,
>                                                 reason, &ret_pages);
>                         /*
>                          * The rules are:
> --
> 2.35.1
>



* Re: [PATCH 2/2] migrate: convert migrate_pages() to use folios
  2022-11-07  8:13   ` Baolin Wang
@ 2022-11-08  8:24     ` Huang, Ying
  0 siblings, 0 replies; 9+ messages in thread
From: Huang, Ying @ 2022-11-08  8:24 UTC (permalink / raw)
  To: Baolin Wang
  Cc: linux-mm, linux-kernel, Andrew Morton, Zi Yan, Yang Shi,
	Oscar Salvador, Matthew Wilcox

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> On 11/4/2022 4:30 PM, Huang Ying wrote:
>> Quite straightforward, the page functions are converted to
>> corresponding folio functions.  Same for comments.
>> THP specific code are converted to be large folio.
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Yang Shi <shy828301@gmail.com>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> ---
>>   mm/migrate.c | 201 +++++++++++++++++++++++++++------------------------
>>   1 file changed, 107 insertions(+), 94 deletions(-)
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index f6dd749dd2f8..b41289ef3b65 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1373,218 +1373,231 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
>>   	return rc;
>>   }
>>   -static inline int try_split_thp(struct page *page, struct
>> list_head *split_pages)
>> +static inline int try_split_folio(struct folio *folio, struct list_head *split_folios)
>>   {
>>   	int rc;
>>   -	lock_page(page);
>> -	rc = split_huge_page_to_list(page, split_pages);
>> -	unlock_page(page);
>> +	folio_lock(folio);
>> +	rc = split_folio_to_list(folio, split_folios);
>> +	folio_unlock(folio);
>>   	if (!rc)
>> -		list_move_tail(&page->lru, split_pages);
>> +		list_move_tail(&folio->lru, split_folios);
>>     	return rc;
>>   }
>>     /*
>> - * migrate_pages - migrate the pages specified in a list, to the free pages
>> + * migrate_pages - migrate the folios specified in a list, to the free folios
>>    *		   supplied as the target for the page migration
>>    *
>> - * @from:		The list of pages to be migrated.
>> - * @get_new_page:	The function used to allocate free pages to be used
>> - *			as the target of the page migration.
>> - * @put_new_page:	The function used to free target pages if migration
>> + * @from:		The list of folios to be migrated.
>> + * @get_new_page:	The function used to allocate free folios to be used
>> + *			as the target of the folio migration.
>> + * @put_new_page:	The function used to free target folios if migration
>>    *			fails, or NULL if no special handling is necessary.
>>    * @private:		Private data to be passed on to get_new_page()
>>    * @mode:		The migration mode that specifies the constraints for
>> - *			page migration, if any.
>> - * @reason:		The reason for page migration.
>> - * @ret_succeeded:	Set to the number of normal pages migrated successfully if
>> + *			folio migration, if any.
>> + * @reason:		The reason for folio migration.
>> + * @ret_succeeded:	Set to the number of folios migrated successfully if
>>    *			the caller passes a non-NULL pointer.
>>    *
>> - * The function returns after 10 attempts or if no pages are movable any more
>> - * because the list has become empty or no retryable pages exist any more.
>> - * It is caller's responsibility to call putback_movable_pages() to return pages
>> + * The function returns after 10 attempts or if no folios are movable any more
>> + * because the list has become empty or no retryable folios exist any more.
>> + * It is caller's responsibility to call putback_movable_pages() to return folios
>>    * to the LRU or free list only if ret != 0.
>>    *
>> - * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
>> - * an error code. The number of THP splits will be considered as the number of
>> - * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
>> + * Returns the number of {normal folio, large folio, hugetlb} that were not
>> + * migrated, or an error code. The number of large folio splits will be
>> + * considered as the number of non-migrated large folio, no matter how many
>> + * split folios of the large folio are migrated successfully.
>>    */
>>   int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>   		free_page_t put_new_page, unsigned long private,
>>   		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>>   {
>>   	int retry = 1;
>> +	int large_retry = 1;
>>   	int thp_retry = 1;
>>   	int nr_failed = 0;
>>   	int nr_failed_pages = 0;
>>   	int nr_retry_pages = 0;
>>   	int nr_succeeded = 0;
>>   	int nr_thp_succeeded = 0;
>> +	int nr_large_failed = 0;
>>   	int nr_thp_failed = 0;
>>   	int nr_thp_split = 0;
>>   	int pass = 0;
>> +	bool is_large = false;
>>   	bool is_thp = false;
>> -	struct page *page;
>> -	struct page *page2;
>> -	int rc, nr_subpages;
>> -	LIST_HEAD(ret_pages);
>> -	LIST_HEAD(thp_split_pages);
>> +	struct folio *folio, *folio2;
>> +	int rc, nr_pages;
>> +	LIST_HEAD(ret_folios);
>> +	LIST_HEAD(split_folios);
>>   	bool nosplit = (reason == MR_NUMA_MISPLACED);
>> -	bool no_subpage_counting = false;
>> +	bool no_split_folio_counting = false;
>>     	trace_mm_migrate_pages_start(mode, reason);
>>   -thp_subpage_migration:
>> -	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
>> +split_folio_migration:
>> +	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
>>   		retry = 0;
>> +		large_retry = 0;
>>   		thp_retry = 0;
>>   		nr_retry_pages = 0;
>>   -		list_for_each_entry_safe(page, page2, from, lru) {
>> +		list_for_each_entry_safe(folio, folio2, from, lru) {
>>   			/*
>> -			 * THP statistics is based on the source huge page.
>> -			 * Capture required information that might get lost
>> -			 * during migration.
>> +			 * large folio statistics is based on the source large
>
> Nit: s/large/Large

Thanks.  Will change this.

>> +			 * folio. Capture required information that might get
>> +			 * lost during migration.
>>   			 */
>> -			is_thp = PageTransHuge(page) && !PageHuge(page);
>> -			nr_subpages = compound_nr(page);
>> +			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>> +			is_thp = is_large && folio_test_pmd_mappable(folio);
>> +			nr_pages = folio_nr_pages(folio);
>>   			cond_resched();
>>   -			if (PageHuge(page))
>> +			if (folio_test_hugetlb(folio))
>>   				rc = unmap_and_move_huge_page(get_new_page,
>> -						put_new_page, private, page,
>> -						pass > 2, mode, reason,
>> -						&ret_pages);
>> +						put_new_page, private,
>> +						&folio->page, pass > 2, mode,
>> +						reason,
>> +						&ret_folios);
>>   			else
>>   				rc = unmap_and_move(get_new_page, put_new_page,
>> -						private, page_folio(page), pass > 2, mode,
>> -						reason, &ret_pages);
>> +						private, folio, pass > 2, mode,
>> +						reason, &ret_folios);
>>   			/*
>>   			 * The rules are:
>> -			 *	Success: non hugetlb page will be freed, hugetlb
>> -			 *		 page will be put back
>> +			 *	Success: non hugetlb folio will be freed, hugetlb
>> +			 *		 folio will be put back
>>   			 *	-EAGAIN: stay on the from list
>>   			 *	-ENOMEM: stay on the from list
>>   			 *	-ENOSYS: stay on the from list
>> -			 *	Other errno: put on ret_pages list then splice to
>> +			 *	Other errno: put on ret_folios list then splice to
>>   			 *		     from list
>>   			 */
>>   			switch(rc) {
>>   			/*
>> -			 * THP migration might be unsupported or the
>> -			 * allocation could've failed so we should
>> -			 * retry on the same page with the THP split
>> -			 * to base pages.
>> +			 * Large folio migration might be unsupported or
>> +			 * the allocation could've failed so we should retry
>> +			 * on the same folio with the large folio split
>> +			 * to normal folios.
>>   			 *
>> -			 * Sub-pages are put in thp_split_pages, and
>> +			 * Split folios are put in split_folios, and
>>   			 * we will migrate them after the rest of the
>>   			 * list is processed.
>>   			 */
>>   			case -ENOSYS:
>> -				/* THP migration is unsupported */
>> -				if (is_thp) {
>> -					nr_thp_failed++;
>> -					if (!try_split_thp(page, &thp_split_pages)) {
>> -						nr_thp_split++;
>> +				/* Large folio migration is unsupported */
>> +				if (is_large) {
>> +					nr_large_failed++;
>> +					nr_thp_failed += is_thp;
>> +					if (!try_split_folio(folio, &split_folios)) {
>> +						nr_thp_split += is_thp;
>>   						break;
>>   					}
>>   				/* Hugetlb migration is unsupported */
>> -				} else if (!no_subpage_counting) {
>> +				} else if (!no_split_folio_counting) {
>>   					nr_failed++;
>>   				}
>>   -				nr_failed_pages += nr_subpages;
>> -				list_move_tail(&page->lru, &ret_pages);
>> +				nr_failed_pages += nr_pages;
>> +				list_move_tail(&folio->lru, &ret_folios);
>>   				break;
>>   			case -ENOMEM:
>>   				/*
>>   				 * When memory is low, don't bother to try to migrate
>> -				 * other pages, just exit.
>> +				 * other folios, just exit.
>>   				 */
>> -				if (is_thp) {
>> -					nr_thp_failed++;
>> -					/* THP NUMA faulting doesn't split THP to retry. */
>> -					if (!nosplit && !try_split_thp(page, &thp_split_pages)) {
>> -						nr_thp_split++;
>> +				if (is_large) {
>> +					nr_large_failed++;
>> +					nr_thp_failed += is_thp;
>> +					/* Large folio NUMA faulting doesn't split to retry. */
>> +					if (!nosplit && !try_split_folio(folio, &split_folios)) {
>
> I am not sure if need to add a is_thp validation before calling
> try_split_folio()?

IIUC try_split_folio() can deal with large folios of arbitrary order now.

> BTW, you should rebase on the mm-unstable branch, since I've added a
> retry for THP split.

Yes.  Andrew reminded me of that too.

Best Regards,
Huang, Ying

>> +						nr_thp_split += is_thp;
>>   						break;
>>   					}
>> -				} else if (!no_subpage_counting) {
>> +				} else if (!no_split_folio_counting) {
>>   					nr_failed++;
>>   				}
>>   -				nr_failed_pages += nr_subpages +
>> nr_retry_pages;
>> +				nr_failed_pages += nr_pages + nr_retry_pages;
>>   				/*
>> -				 * There might be some subpages of fail-to-migrate THPs
>> -				 * left in thp_split_pages list. Move them back to migration
>> +				 * There might be some split folios of fail-to-migrate large
>> +				 * folios left in split_folios list. Move them back to migration
>>   				 * list so that they could be put back to the right list by
>> -				 * the caller otherwise the page refcnt will be leaked.
>> +				 * the caller otherwise the folio refcnt will be leaked.
>>   				 */
>> -				list_splice_init(&thp_split_pages, from);
>> +				list_splice_init(&split_folios, from);
>>   				/* nr_failed isn't updated for not used */
>> +				nr_large_failed += large_retry;
>>   				nr_thp_failed += thp_retry;
>>   				goto out;
>>   			case -EAGAIN:
>> -				if (is_thp)
>> -					thp_retry++;
>> -				else if (!no_subpage_counting)
>> +				if (is_large) {
>> +					large_retry++;
>> +					thp_retry += is_thp;
>> +				} else if (!no_split_folio_counting) {
>>   					retry++;
>> -				nr_retry_pages += nr_subpages;
>> +				}
>> +				nr_retry_pages += nr_pages;
>>   				break;
>>   			case MIGRATEPAGE_SUCCESS:
>> -				nr_succeeded += nr_subpages;
>> -				if (is_thp)
>> -					nr_thp_succeeded++;
>> +				nr_succeeded += nr_pages;
>> +				nr_thp_succeeded += is_thp;
>>   				break;
>>   			default:
>>   				/*
>>   				 * Permanent failure (-EBUSY, etc.):
>> -				 * unlike -EAGAIN case, the failed page is
>> -				 * removed from migration page list and not
>> +				 * unlike -EAGAIN case, the failed folio is
>> +				 * removed from migration folio list and not
>>   				 * retried in the next outer loop.
>>   				 */
>> -				if (is_thp)
>> -					nr_thp_failed++;
>> -				else if (!no_subpage_counting)
>> +				if (is_large) {
>> +					nr_large_failed++;
>> +					nr_thp_failed += is_thp;
>> +				} else if (!no_split_folio_counting) {
>>   					nr_failed++;
>> +				}
>>   -				nr_failed_pages += nr_subpages;
>> +				nr_failed_pages += nr_pages;
>>   				break;
>>   			}
>>   		}
>>   	}
>>   	nr_failed += retry;
>> +	nr_large_failed += large_retry;
>>   	nr_thp_failed += thp_retry;
>>   	nr_failed_pages += nr_retry_pages;
>>   	/*
>> -	 * Try to migrate subpages of fail-to-migrate THPs, no nr_failed
>> -	 * counting in this round, since all subpages of a THP is counted
>> -	 * as 1 failure in the first round.
>> +	 * Try to migrate split folios of fail-to-migrate large folios, no
>> +	 * nr_failed counting in this round, since all split folios of a
>> +	 * large folio is counted as 1 failure in the first round.
>>   	 */
>> -	if (!list_empty(&thp_split_pages)) {
>> +	if (!list_empty(&split_folios)) {
>>   		/*
>> -		 * Move non-migrated pages (after 10 retries) to ret_pages
>> +		 * Move non-migrated folios (after 10 retries) to ret_folios
>>   		 * to avoid migrating them again.
>>   		 */
>> -		list_splice_init(from, &ret_pages);
>> -		list_splice_init(&thp_split_pages, from);
>> -		no_subpage_counting = true;
>> +		list_splice_init(from, &ret_folios);
>> +		list_splice_init(&split_folios, from);
>> +		no_split_folio_counting = true;
>>   		retry = 1;
>> -		goto thp_subpage_migration;
>> +		goto split_folio_migration;
>>   	}
>>   -	rc = nr_failed + nr_thp_failed;
>> +	rc = nr_failed + nr_large_failed;
>>   out:
>>   	/*
>> -	 * Put the permanent failure page back to migration list, they
>> +	 * Put the permanent failure folio back to migration list, they
>>   	 * will be put back to the right list by the caller.
>>   	 */
>> -	list_splice(&ret_pages, from);
>> +	list_splice(&ret_folios, from);
>>     	/*
>> -	 * Return 0 in case all subpages of fail-to-migrate THPs are
>> -	 * migrated successfully.
>> +	 * Return 0 in case all split folios of fail-to-migrate large folios
>> +	 * are migrated successfully.
>>   	 */
>>   	if (list_empty(from))
>>   		rc = 0;


