* Re: [PATCH 7/8] mm/compaction: remove unnecessary return for void function
  2023-07-28 17:10 ` [PATCH 7/8] mm/compaction: remove unnecessary return for void function Kemeng Shi
@ 2023-07-28 10:30   ` David Hildenbrand
  2023-08-01  2:53   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-07-28 10:30 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy

On 28.07.23 19:10, Kemeng Shi wrote:
> Remove unnecessary return for void function
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 6052cb519de1..188d610eb3b6 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1420,8 +1420,6 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
>   	/* Skip this pageblock in the future as it's full or nearly full */
>   	if (start_pfn == end_pfn)
>   		set_pageblock_skip(page);
> -
> -	return;
>   }
>   
>   /* Search orders in round-robin fashion */
> @@ -2863,7 +2861,7 @@ int compaction_register_node(struct node *node)
>   
>   void compaction_unregister_node(struct node *node)
>   {
> -	return device_remove_file(&node->dev, &dev_attr_compact);
> +	device_remove_file(&node->dev, &dev_attr_compact);
>   }
>   #endif /* CONFIG_SYSFS && CONFIG_NUMA */
>   

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false
  2023-07-28 17:10 ` [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
@ 2023-07-28 10:33   ` David Hildenbrand
  2023-08-01  3:02   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-07-28 10:33 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy

On 28.07.23 19:10, Kemeng Shi wrote:
> Keep the same logic as update_pageblock_skip, only set skip if
> no_set_skip_hint is false which is more reasonable.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 188d610eb3b6..6841c0496223 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1418,7 +1418,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
>   	isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
>   
>   	/* Skip this pageblock in the future as it's full or nearly full */
> -	if (start_pfn == end_pfn)
> +	if (start_pfn == end_pfn && !cc->no_set_skip_hint)
>   		set_pageblock_skip(page);
>   }
>   

LGTM, although I am not an expert on that code.

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH 5/8] mm/compaction: correct comment of cached migrate pfn update
  2023-07-28 17:10 ` [PATCH 5/8] mm/compaction: correct comment of cached migrate pfn update Kemeng Shi
@ 2023-07-28 10:35   ` David Hildenbrand
  2023-08-01  2:45   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-07-28 10:35 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy

On 28.07.23 19:10, Kemeng Shi wrote:
> Commit e380bebe47715 ("mm, compaction: keep migration source private to
> a single compaction instance") moved update of async and sync
> compact_cached_migrate_pfn from update_pageblock_skip to
> update_cached_migrate but left the comment behind.
> Move the relevant comment to correct this.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 09c36251c613..1eebb61a1f63 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -464,6 +464,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
>   
>   	pfn = pageblock_end_pfn(pfn);
>   
> +	/* Update where async and sync compaction should restart */
>   	if (pfn > zone->compact_cached_migrate_pfn[0])
>   		zone->compact_cached_migrate_pfn[0] = pfn;
>   	if (cc->mode != MIGRATE_ASYNC &&
> @@ -485,7 +486,6 @@ static void update_pageblock_skip(struct compact_control *cc,
>   
>   	set_pageblock_skip(page);
>   
> -	/* Update where async and sync compaction should restart */
>   	if (pfn < zone->compact_cached_free_pfn)
>   		zone->compact_cached_free_pfn = pfn;
>   }

Again, no expert, but LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
@ 2023-07-28 10:41   ` David Hildenbrand
  2023-07-29  2:23     ` Kemeng Shi
  2023-07-31 12:01   ` Baolin Wang
  1 sibling, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2023-07-28 10:41 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy

On 28.07.23 19:10, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
> 
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 5 ++---
>   1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 9b7a0a69e19f..ce7841363b12 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c

Can we add a short comment which kind of PFN we return (first pfn of 
first offline section after an online section)?

> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>   
>   	while (start_nr-- > 0) {
>   		if (online_section_nr(start_nr))
> -			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> +			return section_nr_to_pfn(start_nr + 1);
>   	}
>   
>   	return 0;
> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>   
>   			next_pfn = skip_offline_sections_reverse(block_start_pfn);
>   			if (next_pfn)
> -				block_start_pfn = max(pageblock_start_pfn(next_pfn),
> -						      low_pfn);
> +				block_start_pfn = max(next_pfn, low_pfn);


So block_start_pfn() will now point at the first PFN of the offline section.

Confusing stuff, but I get the idea and I think it makes sense to me :)

-- 
Cheers,

David / dhildenb



* [PATCH 0/8] Fixes and cleanups to compaction
@ 2023-07-28 17:10 Kemeng Shi
  2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
                   ` (7 more replies)
  0 siblings, 8 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Hi all, this series contains some random cleanups and fixes to
compaction. Details can be found in the respective patches.
This patchset is based on another series of cleanups to locking in
compaction at [1]. Thanks!

[1] https://lore.kernel.org/all/20230725180456.2146626-1-shikemeng@huaweicloud.com/

Kemeng Shi (8):
  mm/compaction: avoid missing last page block in section after skip
    offline sections
  mm/compaction: correct last_migrated_pfn update in compact_zone
  mm/compaction: skip page block marked skip in
    isolate_migratepages_block
  mm/compaction: remove stale fast_find_block flag in
    isolate_migratepages
  mm/compaction: correct comment of cached migrate pfn update
  mm/compaction: correct comment to complete migration failure
  mm/compaction: remove unnecessary return for void function
  mm/compaction: only set skip flag if cc->no_set_skip_hint is false

 mm/compaction.c | 31 ++++++++++---------------------
 1 file changed, 10 insertions(+), 21 deletions(-)

-- 
2.30.0



* [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-07-28 10:41   ` David Hildenbrand
  2023-07-31 12:01   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

skip_offline_sections_reverse returns the last pfn of the online section
it finds. We then set block_start_pfn to the start of the pageblock that
contains that last pfn. On the next iteration we move one pageblock
further and so never scan the last pageblock of the online section.
Make block_start_pfn point to the first pageblock after the online
section to fix this:
1. make skip_offline_sections_reverse return the end pfn of the online
section, i.e. the pfn of the first pageblock after the section.
2. assign block_start_pfn directly from next_pfn.

Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9b7a0a69e19f..ce7841363b12 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
 
 	while (start_nr-- > 0) {
 		if (online_section_nr(start_nr))
-			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
+			return section_nr_to_pfn(start_nr + 1);
 	}
 
 	return 0;
@@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
 
 			next_pfn = skip_offline_sections_reverse(block_start_pfn);
 			if (next_pfn)
-				block_start_pfn = max(pageblock_start_pfn(next_pfn),
-						      low_pfn);
+				block_start_pfn = max(next_pfn, low_pfn);
 
 			continue;
 		}
-- 
2.30.0



* [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
  2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-08-01  2:09   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block Kemeng Shi
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

We record the start pfn of the last isolated pageblock in
last_migrated_pfn. And then:
1. We check whether we marked the pageblock skip for exclusive access in
isolate_migratepages_block by testing whether the next migrate pfn is
still within the last isolated pageblock. If so, we set finish_pageblock
to do a rescan.
2. We check whether a full cc->order block has been scanned by testing
whether the last scan range passed the cc->order block boundary. If so,
we flush the pages that were freed.

We treat cc->migrate_pfn before isolate_migratepages as the start pfn of
the last isolated page range. However, we always align migrate_pfn to a
pageblock boundary, or move to another pageblock, in
fast_find_migrateblock or in the linear forward scan in
isolate_migratepages before doing page isolation in
isolate_migratepages_block.

Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
after the scan to correctly set the start pfn of the last isolated page
range.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index ce7841363b12..fb250c6b2b6e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2482,7 +2482,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 			goto check_drain;
 		case ISOLATE_SUCCESS:
 			update_cached = false;
-			last_migrated_pfn = iteration_start_pfn;
+			last_migrated_pfn = max(cc->zone->zone_start_pfn,
+				pageblock_start_pfn(cc->migrate_pfn - 1));
 		}
 
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
-- 
2.30.0



* [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
  2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
  2023-07-28 17:10 ` [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-08-01  2:35   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages Kemeng Shi
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Move migrate_pfn to the pageblock end when the block is marked skip, to
avoid an unnecessary scan retry of that block by the upper caller.
For example, compact_zone may wrongly rescan a skipped pageblock with
finish_pageblock set, as follows:
1. cc->migrate_pfn points to the start of the pageblock.
2. compact_zone records last_migrated_pfn from cc->migrate_pfn.
3. compact_zone->isolate_migratepages->isolate_migratepages_block tries
to scan the block. low_pfn may be moved forward to the middle of the
block because of free pages at the beginning of the block.
4. We find the first lru page that could be isolated, but the block was
exclusively marked skip.
5. We abort isolate_migratepages_block and make cc->migrate_pfn point to
the found lru page in the middle of the block.
6. compact_zone finds cc->migrate_pfn and last_migrated_pfn in the same
block and wrongly rescans the block with finish_pageblock set.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index fb250c6b2b6e..ad535f880c70 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1123,6 +1123,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				skip_updated = true;
 				if (test_and_set_skip(cc, valid_page) &&
 				    !cc->finish_pageblock) {
+					low_pfn = end_pfn;
 					goto isolate_abort;
 				}
 			}
-- 
2.30.0



* [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
                   ` (2 preceding siblings ...)
  2023-07-28 17:10 ` [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-08-01  2:42   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 5/8] mm/compaction: correct comment of cached migrate pfn update Kemeng Shi
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

In the old code, we set skip on the pageblock found by
fast_find_migrateblock, so we used fast_find_block to avoid skipping the
pageblock just found by fast_find_migrateblock.
Since commit 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip
in fast_find_migrateblock""") removed setting skip in
fast_find_migrateblock, fast_find_block is useless. Remove it.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index ad535f880c70..09c36251c613 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	const isolate_mode_t isolate_mode =
 		(sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
 		(cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
-	bool fast_find_block;
 
 	/*
 	 * Start at where we last stopped, or beginning of the zone as
@@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	if (block_start_pfn < cc->zone->zone_start_pfn)
 		block_start_pfn = cc->zone->zone_start_pfn;
 
-	/*
-	 * fast_find_migrateblock marks a pageblock skipped so to avoid
-	 * the isolation_suitable check below, check whether the fast
-	 * search was successful.
-	 */
-	fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
-
 	/* Only scan within a pageblock boundary */
 	block_end_pfn = pageblock_end_pfn(low_pfn);
 
@@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	 * Do not cross the free scanner.
 	 */
 	for (; block_end_pfn <= cc->free_pfn;
-			fast_find_block = false,
 			cc->migrate_pfn = low_pfn = block_end_pfn,
 			block_start_pfn = block_end_pfn,
 			block_end_pfn += pageblock_nr_pages) {
@@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 		 * before making it "skip" so other compaction instances do
 		 * not scan the same block.
 		 */
-		if (pageblock_aligned(low_pfn) &&
-		    !fast_find_block && !isolation_suitable(cc, page))
+		if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
 			continue;
 
 		/*
-- 
2.30.0



* [PATCH 5/8] mm/compaction: correct comment of cached migrate pfn update
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
                   ` (3 preceding siblings ...)
  2023-07-28 17:10 ` [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-07-28 10:35   ` David Hildenbrand
  2023-08-01  2:45   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 6/8] mm/compaction: correct comment to complete migration failure Kemeng Shi
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Commit e380bebe47715 ("mm, compaction: keep migration source private to
a single compaction instance") moved the update of the async and sync
compact_cached_migrate_pfn from update_pageblock_skip to
update_cached_migrate but left the comment behind.
Move the comment along to correct this.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 09c36251c613..1eebb61a1f63 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -464,6 +464,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
 
 	pfn = pageblock_end_pfn(pfn);
 
+	/* Update where async and sync compaction should restart */
 	if (pfn > zone->compact_cached_migrate_pfn[0])
 		zone->compact_cached_migrate_pfn[0] = pfn;
 	if (cc->mode != MIGRATE_ASYNC &&
@@ -485,7 +486,6 @@ static void update_pageblock_skip(struct compact_control *cc,
 
 	set_pageblock_skip(page);
 
-	/* Update where async and sync compaction should restart */
 	if (pfn < zone->compact_cached_free_pfn)
 		zone->compact_cached_free_pfn = pfn;
 }
-- 
2.30.0



* [PATCH 6/8] mm/compaction: correct comment to complete migration failure
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
                   ` (4 preceding siblings ...)
  2023-07-28 17:10 ` [PATCH 5/8] mm/compaction: correct comment of cached migrate pfn update Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-07-28 17:10 ` [PATCH 7/8] mm/compaction: remove unnecessary return for void function Kemeng Shi
  2023-07-28 17:10 ` [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
  7 siblings, 0 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Commit cfccd2e63e7e0 ("mm, compaction: finish pageblocks on complete
migration failure") converted the cc->order aligned check to a
pageblock_order aligned check. Correct the comment accordingly.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 1eebb61a1f63..6052cb519de1 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2497,7 +2497,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 			}
 			/*
 			 * If an ASYNC or SYNC_LIGHT fails to migrate a page
-			 * within the current order-aligned block and
+			 * within the pageblock_order-aligned block and
 			 * fast_find_migrateblock may be used then scan the
 			 * remainder of the pageblock. This will mark the
 			 * pageblock "skip" to avoid rescanning in the near
-- 
2.30.0



* [PATCH 7/8] mm/compaction: remove unnecessary return for void function
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
                   ` (5 preceding siblings ...)
  2023-07-28 17:10 ` [PATCH 6/8] mm/compaction: correct comment to complete migration failure Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-07-28 10:30   ` David Hildenbrand
  2023-08-01  2:53   ` Baolin Wang
  2023-07-28 17:10 ` [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
  7 siblings, 2 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Remove unnecessary return statements from void functions.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 6052cb519de1..188d610eb3b6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1420,8 +1420,6 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
 	/* Skip this pageblock in the future as it's full or nearly full */
 	if (start_pfn == end_pfn)
 		set_pageblock_skip(page);
-
-	return;
 }
 
 /* Search orders in round-robin fashion */
@@ -2863,7 +2861,7 @@ int compaction_register_node(struct node *node)
 
 void compaction_unregister_node(struct node *node)
 {
-	return device_remove_file(&node->dev, &dev_attr_compact);
+	device_remove_file(&node->dev, &dev_attr_compact);
 }
 #endif /* CONFIG_SYSFS && CONFIG_NUMA */
 
-- 
2.30.0



* [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false
  2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
                   ` (6 preceding siblings ...)
  2023-07-28 17:10 ` [PATCH 7/8] mm/compaction: remove unnecessary return for void function Kemeng Shi
@ 2023-07-28 17:10 ` Kemeng Shi
  2023-07-28 10:33   ` David Hildenbrand
  2023-08-01  3:02   ` Baolin Wang
  7 siblings, 2 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-28 17:10 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel, baolin.wang, mgorman, willy, david
  Cc: shikemeng

Keep the same logic as update_pageblock_skip: only set the skip flag if
no_set_skip_hint is false, which is more reasonable.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 188d610eb3b6..6841c0496223 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1418,7 +1418,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
 	isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
 
 	/* Skip this pageblock in the future as it's full or nearly full */
-	if (start_pfn == end_pfn)
+	if (start_pfn == end_pfn && !cc->no_set_skip_hint)
 		set_pageblock_skip(page);
 }
 
-- 
2.30.0



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-07-28 10:41   ` David Hildenbrand
@ 2023-07-29  2:23     ` Kemeng Shi
  0 siblings, 0 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-07-29  2:23 UTC (permalink / raw)
  To: David Hildenbrand, akpm, linux-mm, linux-kernel, baolin.wang,
	mgorman, willy



on 7/28/2023 6:41 PM, David Hildenbrand wrote:
> On 28.07.23 19:10, Kemeng Shi wrote:
>> skip_offline_sections_reverse will return the last pfn in found online
>> section. Then we set block_start_pfn to start of page block which
>> contains the last pfn in section. Then we continue, move one page
>> block forward and ignore the last page block in the online section.
>> Make block_start_pfn point to first page block after online section to fix
>> this:
>> 1. make skip_offline_sections_reverse return end pfn of online section,
>> i.e. pfn of page block after online section.
>> 2. assign block_start_pfn with next_pfn.
>>
>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>   mm/compaction.c | 5 ++---
>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 9b7a0a69e19f..ce7841363b12 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
> 
> Can we add a short comment which kind of PFN we return (first pfn of first offline section after an online section)?
> 
Hi David, thanks for the review. Sure, I will add comment to skip_offline_sections_reverse.

-- 
Best wishes
Kemeng Shi
>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>         while (start_nr-- > 0) {
>>           if (online_section_nr(start_nr))
>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>> +            return section_nr_to_pfn(start_nr + 1);
>>       }
>>         return 0;
>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>                 next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>               if (next_pfn)
>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>> -                              low_pfn);
>> +                block_start_pfn = max(next_pfn, low_pfn);
> 
> 
> So block_start_pfn() will now point at the first PFN of the offline section.
> 
> Confusing stuff, but I get the idea and I think it makes sense to me :)
> 



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
  2023-07-28 10:41   ` David Hildenbrand
@ 2023-07-31 12:01   ` Baolin Wang
  2023-08-01  2:18     ` Kemeng Shi
  1 sibling, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-07-31 12:01 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> skip_offline_sections_reverse will return the last pfn in found online
> section. Then we set block_start_pfn to start of page block which
> contains the last pfn in section. Then we continue, move one page
> block forward and ignore the last page block in the online section.
> Make block_start_pfn point to first page block after online section to fix
> this:
> 1. make skip_offline_sections_reverse return end pfn of online section,
> i.e. pfn of page block after online section.
> 2. assign block_start_pfn with next_pfn.
> 
> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 5 ++---
>   1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 9b7a0a69e19f..ce7841363b12 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>   
>   	while (start_nr-- > 0) {
>   		if (online_section_nr(start_nr))
> -			return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
> +			return section_nr_to_pfn(start_nr + 1);

This is incorrect, you returned the start pfn of this section.

>   	}
>   
>   	return 0;
> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>   
>   			next_pfn = skip_offline_sections_reverse(block_start_pfn);
>   			if (next_pfn)
> -				block_start_pfn = max(pageblock_start_pfn(next_pfn),
> -						      low_pfn);
> +				block_start_pfn = max(next_pfn, low_pfn);

'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not 
pageblock-aligned (though this is not the common case), we should skip it.

But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 
still ignores the last pageblock, which is not right. So I think it 
should be:
block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
block_start_pfn = max(block_start_pfn, low_pfn);


* Re: [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone
  2023-07-28 17:10 ` [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
@ 2023-08-01  2:09   ` Baolin Wang
  2023-08-01  2:19     ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  2:09 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> We record start pfn of last isolated page block with last_migrated_pfn. And
> then:
> 1. We check if we mark the page block skip for exclusive access in
> isolate_migratepages_block by test if next migrate pfn is still in last
> isolated page block. If so, we will set finish_pageblock to do the rescan.
> 2. We check if a full cc->order block is scanned by test if last scan range
> passes the cc->order block boundary. If so, we flush the pages were freed.
> 
> We treat cc->migrate_pfn before isolate_migratepages as the start pfn of
> last isolated page range. However, we always align migrate_pfn to page block
> or move to another page block in fast_find_migrateblock or in linearly scan
> forward in isolate_migratepages before do page isolation in
> isolate_migratepages_block.

Right. But can you describe the impact in more detail if
last_migrated_pfn is not set correctly? For example, this will result in
the rescan flag not being set correctly, missing a pageblock's rescan.

Otherwise looks good to me.

> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
> after scan to correctly set start pfn of last isolated page range. > Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ce7841363b12..fb250c6b2b6e 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2482,7 +2482,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>   			goto check_drain;
>   		case ISOLATE_SUCCESS:
>   			update_cached = false;
> -			last_migrated_pfn = iteration_start_pfn;
> +			last_migrated_pfn = max(cc->zone->zone_start_pfn,
> +				pageblock_start_pfn(cc->migrate_pfn - 1));
>   		}
>   
>   		err = migrate_pages(&cc->migratepages, compaction_alloc,

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-07-31 12:01   ` Baolin Wang
@ 2023-08-01  2:18     ` Kemeng Shi
  2023-08-01  2:36       ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  2:18 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 7/31/2023 8:01 PM, Baolin Wang wrote:
> 
> 
> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>> skip_offline_sections_reverse will return the last pfn in found online
>> section. Then we set block_start_pfn to start of page block which
>> contains the last pfn in section. Then we continue, move one page
>> block forward and ignore the last page block in the online section.
>> Make block_start_pfn point to first page block after online section to fix
>> this:
>> 1. make skip_offline_sections_reverse return end pfn of online section,
>> i.e. pfn of page block after online section.
>> 2. assign block_start_pfn with next_pfn.
>>
>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>   mm/compaction.c | 5 ++---
>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 9b7a0a69e19f..ce7841363b12 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>         while (start_nr-- > 0) {
>>           if (online_section_nr(start_nr))
>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>> +            return section_nr_to_pfn(start_nr + 1);
> 
> This is incorrect, you returned the start pfn of this section.
> 
>>       }
>>         return 0;
>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>                 next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>               if (next_pfn)
>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>> -                              low_pfn);
>> +                block_start_pfn = max(next_pfn, low_pfn);
> 
> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
> 
> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
> block_start_pfn = max(block_start_pfn, low_pfn);
> 
Hi Baolin, thanks for the reply! As skip_offline_sections_reverse is based
on skip_offline_sections, I assumed that sections are pageblock aligned,
given that we already use the section start from skip_offline_sections as
block_start_pfn without an alignment check.
If the section size is not pageblock aligned in the real world, the pageblock
alignment check should be added to both skip_offline_sections and
skip_offline_sections_reverse.
If no one is against this, I will fix it in the next version. Thanks!

-- 
Best wishes
Kemeng Shi


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone
  2023-08-01  2:09   ` Baolin Wang
@ 2023-08-01  2:19     ` Kemeng Shi
  0 siblings, 0 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  2:19 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 10:09 AM, Baolin Wang wrote:
> 
> 
> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>> We record start pfn of last isolated page block with last_migrated_pfn. And
>> then:
>> 1. We check if we mark the page block skip for exclusive access in
>> isolate_migratepages_block by test if next migrate pfn is still in last
>> isolated page block. If so, we will set finish_pageblock to do the rescan.
>> 2. We check if a full cc->order block is scanned by test if last scan range
>> passes the cc->order block boundary. If so, we flush the pages were freed.
>>
>> We treat cc->migrate_pfn before isolate_migratepages as the start pfn of
>> last isolated page range. However, we always align migrate_pfn to page block
>> or move to another page block in fast_find_migrateblock or in linearly scan
>> forward in isolate_migratepages before do page isolation in
>> isolate_migratepages_block.
> 
> Right. But can you describe the impact in detail if the last_migrated_pfn is not set correctly? For example, this will result in rescan not being set correctly to miss a pageblock's rescanning.
> 
> Otherwise looks good to me.
> 
Sure, the impact will be added in the next version. Thanks!
>> Update last_migrated_pfn with pageblock_start_pfn(cc->migrate_pfn - 1)
>> after scan to correctly set start pfn of last isolated page range. > Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>   mm/compaction.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index ce7841363b12..fb250c6b2b6e 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -2482,7 +2482,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
>>               goto check_drain;
>>           case ISOLATE_SUCCESS:
>>               update_cached = false;
>> -            last_migrated_pfn = iteration_start_pfn;
>> +            last_migrated_pfn = max(cc->zone->zone_start_pfn,
>> +                pageblock_start_pfn(cc->migrate_pfn - 1));
>>           }
>>             err = migrate_pages(&cc->migratepages, compaction_alloc,
> 

-- 
Best wishes
Kemeng Shi


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block
  2023-07-28 17:10 ` [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block Kemeng Shi
@ 2023-08-01  2:35   ` Baolin Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  2:35 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> Move migrate_pfn to page block end when block is marked skip to avoid
> unnecessary scan retry of that block from upper caller.
> For example, compact_zone may wrongly rescan skip page block with
> finish_pageblock set as following:
> 1. cc->migrate point to the start of page block
> 2. compact_zone record last_migrated_pfn to cc->migrate
> 3. compact_zone->isolate_migratepages->isolate_migratepages_block tries to
> scan the block. The low_pfn maybe moved forward to middle of block because
> of free pages at beginning of block.
> 4. we find first lru page could be isolated but block was exclusive marked
> skip.
> 5. abort isolate_migratepages_block and make cc->migrate_pfn point to
> found lru page at middle of block.
> 6. compact_zone find cc->migrate_pfn and last_migrated_pfn are in the same
> block and wrongly rescan the block with finish_pageblock set.

Good catch.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index fb250c6b2b6e..ad535f880c70 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1123,6 +1123,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   				skip_updated = true;
>   				if (test_and_set_skip(cc, valid_page) &&
>   				    !cc->finish_pageblock) {
> +					low_pfn = end_pfn;
>   					goto isolate_abort;
>   				}
>   			}

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  2:18     ` Kemeng Shi
@ 2023-08-01  2:36       ` Kemeng Shi
  2023-08-01  3:53         ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  2:36 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 10:18 AM, Kemeng Shi wrote:
> 
> 
> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>
>>
>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>> skip_offline_sections_reverse will return the last pfn in found online
>>> section. Then we set block_start_pfn to start of page block which
>>> contains the last pfn in section. Then we continue, move one page
>>> block forward and ignore the last page block in the online section.
>>> Make block_start_pfn point to first page block after online section to fix
>>> this:
>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>> i.e. pfn of page block after online section.
>>> 2. assign block_start_pfn with next_pfn.
>>>
>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>> ---
>>>   mm/compaction.c | 5 ++---
>>>   1 file changed, 2 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index 9b7a0a69e19f..ce7841363b12 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>         while (start_nr-- > 0) {
>>>           if (online_section_nr(start_nr))
>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>> +            return section_nr_to_pfn(start_nr + 1);
>>
>> This is incorrect, you returned the start pfn of this section.
>>
>>>       }
>>>         return 0;
>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>                 next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>               if (next_pfn)
>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>> -                              low_pfn);
>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>
>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>
>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>> block_start_pfn = max(block_start_pfn, low_pfn);
>>
> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
> on skip_offline_sections. I make the assumption that section is pageblock
> aligned based on that we use section start from skip_offline_sections as
> block_start_fpn without align check.
> If section size is not pageblock aligned in real world, the pageblock aligned
> check should be added to skip_offline_sections and skip_offline_sections_reverse.
> If no one is against this, I will fix this in next version. THanks!
> 
More information on section alignment: for the powerpc arch, SECTION_SIZE_BITS
is 24 while PAGE_SHIFT could be configured to 18.
The pageblock then spans (18 + MAX_ORDER) bits, which could be 28 and is greater
than SECTION_SIZE_BITS (24), so the section start is not aligned with the
pageblock size. Please correct me if I missed anything. Thanks!

-- 
Best wishes
Kemeng Shi


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-07-28 17:10 ` [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages Kemeng Shi
@ 2023-08-01  2:42   ` Baolin Wang
  2023-08-01  3:24     ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  2:42 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> In old code, we set skip to found page block in fast_find_migrateblock. So
> we use fast_find_block to avoid skip found page block from
> fast_find_migrateblock.
> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
> then fast_find_block is useless.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 12 +-----------
>   1 file changed, 1 insertion(+), 11 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ad535f880c70..09c36251c613 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>   	const isolate_mode_t isolate_mode =
>   		(sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>   		(cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
> -	bool fast_find_block;
>   
>   	/*
>   	 * Start at where we last stopped, or beginning of the zone as
> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>   	if (block_start_pfn < cc->zone->zone_start_pfn)
>   		block_start_pfn = cc->zone->zone_start_pfn;
>   
> -	/*
> -	 * fast_find_migrateblock marks a pageblock skipped so to avoid
> -	 * the isolation_suitable check below, check whether the fast
> -	 * search was successful.
> -	 */
> -	fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
> -
>   	/* Only scan within a pageblock boundary */
>   	block_end_pfn = pageblock_end_pfn(low_pfn);
>   
> @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>   	 * Do not cross the free scanner.
>   	 */
>   	for (; block_end_pfn <= cc->free_pfn;
> -			fast_find_block = false,
>   			cc->migrate_pfn = low_pfn = block_end_pfn,
>   			block_start_pfn = block_end_pfn,
>   			block_end_pfn += pageblock_nr_pages) {
> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>   		 * before making it "skip" so other compaction instances do
>   		 * not scan the same block.
>   		 */
> -		if (pageblock_aligned(low_pfn) &&
> -		    !fast_find_block && !isolation_suitable(cc, page))
> +		if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))

I do not think so. If the pageblock is found by 
fast_find_migrateblock(), that means it definitely does not have the 
skip flag set, so there is no need to call isolation_suitable() if 
fast_find_block is true, right?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 5/8] mm/compaction: corret comment of cached migrate pfn update
  2023-07-28 17:10 ` [PATCH 5/8] mm/compaction: corret comment of cached migrate pfn update Kemeng Shi
  2023-07-28 10:35   ` David Hildenbrand
@ 2023-08-01  2:45   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  2:45 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> Commit e380bebe47715 ("mm, compaction: keep migration source private to
> a single compaction instance") moved update of async and sync
> compact_cached_migrate_pfn from update_pageblock_skip to
> update_cached_migrate but left the comment behind.
> Move the relevant comment to correct this.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>   mm/compaction.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 09c36251c613..1eebb61a1f63 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -464,6 +464,7 @@ static void update_cached_migrate(struct compact_control *cc, unsigned long pfn)
>   
>   	pfn = pageblock_end_pfn(pfn);
>   
> +	/* Update where async and sync compaction should restart */
>   	if (pfn > zone->compact_cached_migrate_pfn[0])
>   		zone->compact_cached_migrate_pfn[0] = pfn;
>   	if (cc->mode != MIGRATE_ASYNC &&
> @@ -485,7 +486,6 @@ static void update_pageblock_skip(struct compact_control *cc,
>   
>   	set_pageblock_skip(page);
>   
> -	/* Update where async and sync compaction should restart */
>   	if (pfn < zone->compact_cached_free_pfn)
>   		zone->compact_cached_free_pfn = pfn;
>   }

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 7/8] mm/compaction: remove unnecessary return for void function
  2023-07-28 17:10 ` [PATCH 7/8] mm/compaction: remove unnecessary return for void function Kemeng Shi
  2023-07-28 10:30   ` David Hildenbrand
@ 2023-08-01  2:53   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  2:53 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> Remove unnecessary return for void function
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>   mm/compaction.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 6052cb519de1..188d610eb3b6 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1420,8 +1420,6 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
>   	/* Skip this pageblock in the future as it's full or nearly full */
>   	if (start_pfn == end_pfn)
>   		set_pageblock_skip(page);
> -
> -	return;
>   }
>   
>   /* Search orders in round-robin fashion */
> @@ -2863,7 +2861,7 @@ int compaction_register_node(struct node *node)
>   
>   void compaction_unregister_node(struct node *node)
>   {
> -	return device_remove_file(&node->dev, &dev_attr_compact);
> +	device_remove_file(&node->dev, &dev_attr_compact);
>   }
>   #endif /* CONFIG_SYSFS && CONFIG_NUMA */
>   

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false
  2023-07-28 17:10 ` [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
  2023-07-28 10:33   ` David Hildenbrand
@ 2023-08-01  3:02   ` Baolin Wang
  1 sibling, 0 replies; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  3:02 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 7/29/2023 1:10 AM, Kemeng Shi wrote:
> Keep the same logic as update_pageblock_skip, only set skip if
> no_set_skip_hint is false which is more reasonable.

Um, fast_find_migrateblock() and fast_isolate_freepages() rely on the 
skip flag and ignore cc->no_set_skip_hint, so I am not sure this is 
helpful. Let's see if Mel has some input.

> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>   mm/compaction.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 188d610eb3b6..6841c0496223 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1418,7 +1418,7 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn)
>   	isolate_freepages_block(cc, &start_pfn, end_pfn, &cc->freepages, 1, false);
>   
>   	/* Skip this pageblock in the future as it's full or nearly full */
> -	if (start_pfn == end_pfn)
> +	if (start_pfn == end_pfn && !cc->no_set_skip_hint)
>   		set_pageblock_skip(page);
>   }
>   

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-08-01  2:42   ` Baolin Wang
@ 2023-08-01  3:24     ` Kemeng Shi
  2023-08-01  3:34       ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  3:24 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 10:42 AM, Baolin Wang wrote:
> 
> 
> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>> In old code, we set skip to found page block in fast_find_migrateblock. So
>> we use fast_find_block to avoid skip found page block from
>> fast_find_migrateblock.
>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>> then fast_find_block is useless.
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>> ---
>>   mm/compaction.c | 12 +-----------
>>   1 file changed, 1 insertion(+), 11 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index ad535f880c70..09c36251c613 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>       const isolate_mode_t isolate_mode =
>>           (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>           (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>> -    bool fast_find_block;
>>         /*
>>        * Start at where we last stopped, or beginning of the zone as
>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>       if (block_start_pfn < cc->zone->zone_start_pfn)
>>           block_start_pfn = cc->zone->zone_start_pfn;
>>   -    /*
>> -     * fast_find_migrateblock marks a pageblock skipped so to avoid
>> -     * the isolation_suitable check below, check whether the fast
>> -     * search was successful.
>> -     */
>> -    fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>> -
>>       /* Only scan within a pageblock boundary */
>>       block_end_pfn = pageblock_end_pfn(low_pfn);
>>   @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>        * Do not cross the free scanner.
>>        */
>>       for (; block_end_pfn <= cc->free_pfn;
>> -            fast_find_block = false,
>>               cc->migrate_pfn = low_pfn = block_end_pfn,
>>               block_start_pfn = block_end_pfn,
>>               block_end_pfn += pageblock_nr_pages) {
>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>            * before making it "skip" so other compaction instances do
>>            * not scan the same block.
>>            */
>> -        if (pageblock_aligned(low_pfn) &&
>> -            !fast_find_block && !isolation_suitable(cc, page))
>> +        if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
> 
> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
> 
> 
Actually, the found pageblock could be set skip because:
1. another compactor could mark this pageblock as skip after the zone lock is
released in fast_find_migrateblock.
2. fast_find_migrateblock may use a pfn from reinit_migrate_pfn which was
previously found and scanned. It could be fully scanned and marked skip after
its first return from fast_find_migrateblock, and then it should be skipped.
Thanks!
-- 
Best wishes
Kemeng Shi


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-08-01  3:24     ` Kemeng Shi
@ 2023-08-01  3:34       ` Baolin Wang
  2023-08-01  3:48         ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  3:34 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 11:24 AM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>
>>
>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>> we use fast_find_block to avoid skip found page block from
>>> fast_find_migrateblock.
>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>> then fast_find_block is useless.
>>>
>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>> ---
>>>    mm/compaction.c | 12 +-----------
>>>    1 file changed, 1 insertion(+), 11 deletions(-)
>>>
>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>> index ad535f880c70..09c36251c613 100644
>>> --- a/mm/compaction.c
>>> +++ b/mm/compaction.c
>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>        const isolate_mode_t isolate_mode =
>>>            (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>>            (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>> -    bool fast_find_block;
>>>          /*
>>>         * Start at where we last stopped, or beginning of the zone as
>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>        if (block_start_pfn < cc->zone->zone_start_pfn)
>>>            block_start_pfn = cc->zone->zone_start_pfn;
>>>    -    /*
>>> -     * fast_find_migrateblock marks a pageblock skipped so to avoid
>>> -     * the isolation_suitable check below, check whether the fast
>>> -     * search was successful.
>>> -     */
>>> -    fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>> -
>>>        /* Only scan within a pageblock boundary */
>>>        block_end_pfn = pageblock_end_pfn(low_pfn);
>>>    @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>         * Do not cross the free scanner.
>>>         */
>>>        for (; block_end_pfn <= cc->free_pfn;
>>> -            fast_find_block = false,
>>>                cc->migrate_pfn = low_pfn = block_end_pfn,
>>>                block_start_pfn = block_end_pfn,
>>>                block_end_pfn += pageblock_nr_pages) {
>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>             * before making it "skip" so other compaction instances do
>>>             * not scan the same block.
>>>             */
>>> -        if (pageblock_aligned(low_pfn) &&
>>> -            !fast_find_block && !isolation_suitable(cc, page))
>>> +        if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>
>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>
>>
> Actually, found pageblock could be set skip as:
> 1. other compactor could mark this pageblock as skip after zone lock is realeased
> in fast_find_migrateblock.

Yes, but your patch also cannot close this race window; the skip flag 
can still be set by other compactors after the isolation_suitable() 
validation.

> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
> and sacnned. It could be fully sacnned and marked skip after it's first return from

Right, but in that case 'fast_find_block' is false, and we will call 
isolation_suitable() to validate the skip flag.

> fast_find_migrateblock and it should be skipped.
> Thanks!

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-08-01  3:34       ` Baolin Wang
@ 2023-08-01  3:48         ` Kemeng Shi
  2023-08-01  8:15           ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  3:48 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 11:34 AM, Baolin Wang wrote:
> 
> 
> On 8/1/2023 11:24 AM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>>
>>>
>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>>> we use fast_find_block to avoid skip found page block from
>>>> fast_find_migrateblock.
>>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>>> then fast_find_block is useless.
>>>>
>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>> ---
>>>>    mm/compaction.c | 12 +-----------
>>>>    1 file changed, 1 insertion(+), 11 deletions(-)
>>>>
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index ad535f880c70..09c36251c613 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>        const isolate_mode_t isolate_mode =
>>>>            (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>>>            (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>>> -    bool fast_find_block;
>>>>          /*
>>>>         * Start at where we last stopped, or beginning of the zone as
>>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>        if (block_start_pfn < cc->zone->zone_start_pfn)
>>>>            block_start_pfn = cc->zone->zone_start_pfn;
>>>>    -    /*
>>>> -     * fast_find_migrateblock marks a pageblock skipped so to avoid
>>>> -     * the isolation_suitable check below, check whether the fast
>>>> -     * search was successful.
>>>> -     */
>>>> -    fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>>> -
>>>>        /* Only scan within a pageblock boundary */
>>>>        block_end_pfn = pageblock_end_pfn(low_pfn);
>>>>    @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>         * Do not cross the free scanner.
>>>>         */
>>>>        for (; block_end_pfn <= cc->free_pfn;
>>>> -            fast_find_block = false,
>>>>                cc->migrate_pfn = low_pfn = block_end_pfn,
>>>>                block_start_pfn = block_end_pfn,
>>>>                block_end_pfn += pageblock_nr_pages) {
>>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>             * before making it "skip" so other compaction instances do
>>>>             * not scan the same block.
>>>>             */
>>>> -        if (pageblock_aligned(low_pfn) &&
>>>> -            !fast_find_block && !isolation_suitable(cc, page))
>>>> +        if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>>
>>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>>
>>>
>> Actually, found pageblock could be set skip as:
>> 1. other compactor could mark this pageblock as skip after zone lock is realeased
>> in fast_find_migrateblock.
> 
> Yes, but your patch also can not close this race window, that means it can also be set skip flag after the isolation_suitable() validation by other compactors.
> 
Yes, but I think it's still worth removing the many fast_find_block-related
checks to reduce code complexity, at the cost of one redundant
isolation_suitable() call, which may even skip some blocks if we are lucky.
>> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
>> and sacnned. It could be fully sacnned and marked skip after it's first return from
> 
> Right, but now the 'fast_find_block' is false, and we will call isolation_suitable() to validate the skip flag.
> 
Right, sorry for missing that.

But it's OK to keep fast_find_block if you insist; I will just correct the stale
comment that "fast_find_migrateblock marks a pageblock skipped ..." in the next version.
Thanks!
>> fast_find_migrateblock and it should be skipped.
>> Thanks!
> 

-- 
Best wishes
Kemeng Shi


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  2:36       ` Kemeng Shi
@ 2023-08-01  3:53         ` Baolin Wang
  2023-08-01  6:08           ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  3:53 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 10:36 AM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>
>>
>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>
>>>
>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>> section. Then we set block_start_pfn to start of page block which
>>>> contains the last pfn in section. Then we continue, move one page
>>>> block forward and ignore the last page block in the online section.
>>>> Make block_start_pfn point to first page block after online section to fix
>>>> this:
>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>> i.e. pfn of page block after online section.
>>>> 2. assign block_start_pfn with next_pfn.
>>>>
>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>> ---
>>>>    mm/compaction.c | 5 ++---
>>>>    1 file changed, 2 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>          while (start_nr-- > 0) {
>>>>            if (online_section_nr(start_nr))
>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>
>>> This is incorrect, you returned the start pfn of this section.
>>>
>>>>        }
>>>>          return 0;
>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>                  next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>                if (next_pfn)
>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>> -                              low_pfn);
>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>
>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>
>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>
>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>> on skip_offline_sections. I make the assumption that section is pageblock
>> aligned based on that we use section start from skip_offline_sections as
>> block_start_fpn without align check.
>> If section size is not pageblock aligned in real world, the pageblock aligned
>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>> If no one is against this, I will fix this in next version. THanks!
>>
> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
> with 24 while PAGE_SHIFT could be configured to 18.
> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,

The maximum pageblock order is MAX_ORDER. But after thinking more, I 
think returning the start pfn or end pfn of a section is okay, since it 
should be aligned to the pageblock order IIUC.

So I think your change is good:
+ block_start_pfn = max(next_pfn, low_pfn);

But in skip_offline_sections_reverse(), we should still return the last 
pfn of the online section.


* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  3:53         ` Baolin Wang
@ 2023-08-01  6:08           ` Kemeng Shi
  2023-08-01  8:01             ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  6:08 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 11:53 AM, Baolin Wang wrote:
> 
> 
> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>
>>>
>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>
>>>>
>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>> section. Then we set block_start_pfn to start of page block which
>>>>> contains the last pfn in section. Then we continue, move one page
>>>>> block forward and ignore the last page block in the online section.
>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>> this:
>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>> i.e. pfn of page block after online section.
>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>
>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>> ---
>>>>>    mm/compaction.c | 5 ++---
>>>>>    1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>> --- a/mm/compaction.c
>>>>> +++ b/mm/compaction.c
>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>          while (start_nr-- > 0) {
>>>>>            if (online_section_nr(start_nr))
>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>
>>>> This is incorrect, you returned the start pfn of this section.
>>>>
>>>>>        }
>>>>>          return 0;
>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>                  next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>                if (next_pfn)
>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>> -                              low_pfn);
>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>
>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>
>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>
>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>> on skip_offline_sections. I make the assumption that section is pageblock
>>> aligned based on that we use section start from skip_offline_sections as
>>> block_start_fpn without align check.
>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>> If no one is against this, I will fix this in next version. THanks!
>>>
>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>> with 24 while PAGE_SHIFT could be configured to 18.
>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
> 
> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
> 
Right, I mixed up the unit.
> So I think your change is good:
> + block_start_pfn = max(next_pfn, low_pfn);
> 
> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
> 
Sure, then we should assign block_start_pfn with the following change. Does this look good to you?
-                block_start_pfn = max(pageblock_start_pfn(next_pfn),
+		 block_start_pfn = max(pageblock_end_pfn(next_pfn),
                              low_pfn);

-- 
Best wishes
Kemeng Shi



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  6:08           ` Kemeng Shi
@ 2023-08-01  8:01             ` Baolin Wang
  2023-08-01  8:42               ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  8:01 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 2:08 PM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>
>>
>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>
>>>
>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>
>>>>
>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>
>>>>>
>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>> block forward and ignore the last page block in the online section.
>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>> this:
>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>> i.e. pfn of page block after online section.
>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>
>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>> ---
>>>>>>     mm/compaction.c | 5 ++---
>>>>>>     1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>> --- a/mm/compaction.c
>>>>>> +++ b/mm/compaction.c
>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>           while (start_nr-- > 0) {
>>>>>>             if (online_section_nr(start_nr))
>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>
>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>
>>>>>>         }
>>>>>>           return 0;
>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>                   next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>                 if (next_pfn)
>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>> -                              low_pfn);
>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>
>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>
>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>
>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>> aligned based on that we use section start from skip_offline_sections as
>>>> block_start_fpn without align check.
>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>> If no one is against this, I will fix this in next version. THanks!
>>>>
>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>> with 24 while PAGE_SHIFT could be configured to 18.
>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>
>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>
> Right, I mixed up the unit.
>> So I think your change is good:
>> + block_start_pfn = max(next_pfn, low_pfn);
>>
>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>
> Sure, then we should assign block_start_pfn with following change. Is this good to you?
> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
> +		 block_start_pfn = max(pageblock_end_pfn(next_pfn),
>                                low_pfn);

The last pfn of a section is already section aligned, so I think there is 
no need to call pageblock_end_pfn(); your original change is okay to me.
block_start_pfn = max(next_pfn, low_pfn);


* Re: [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages
  2023-08-01  3:48         ` Kemeng Shi
@ 2023-08-01  8:15           ` Baolin Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  8:15 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 11:48 AM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 11:34 AM, Baolin Wang wrote:
>>
>>
>> On 8/1/2023 11:24 AM, Kemeng Shi wrote:
>>>
>>>
>>> on 8/1/2023 10:42 AM, Baolin Wang wrote:
>>>>
>>>>
>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>> In old code, we set skip to found page block in fast_find_migrateblock. So
>>>>> we use fast_find_block to avoid skip found page block from
>>>>> fast_find_migrateblock.
>>>>> In 90ed667c03fe5 ("Revert "Revert "mm/compaction: fix set skip in
>>>>> fast_find_migrateblock"""), we remove skip set in fast_find_migrateblock,
>>>>> then fast_find_block is useless.
>>>>>
>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>> ---
>>>>>     mm/compaction.c | 12 +-----------
>>>>>     1 file changed, 1 insertion(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>> index ad535f880c70..09c36251c613 100644
>>>>> --- a/mm/compaction.c
>>>>> +++ b/mm/compaction.c
>>>>> @@ -1949,7 +1949,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>>         const isolate_mode_t isolate_mode =
>>>>>             (sysctl_compact_unevictable_allowed ? ISOLATE_UNEVICTABLE : 0) |
>>>>>             (cc->mode != MIGRATE_SYNC ? ISOLATE_ASYNC_MIGRATE : 0);
>>>>> -    bool fast_find_block;
>>>>>           /*
>>>>>          * Start at where we last stopped, or beginning of the zone as
>>>>> @@ -1961,13 +1960,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>>         if (block_start_pfn < cc->zone->zone_start_pfn)
>>>>>             block_start_pfn = cc->zone->zone_start_pfn;
>>>>>     -    /*
>>>>> -     * fast_find_migrateblock marks a pageblock skipped so to avoid
>>>>> -     * the isolation_suitable check below, check whether the fast
>>>>> -     * search was successful.
>>>>> -     */
>>>>> -    fast_find_block = low_pfn != cc->migrate_pfn && !cc->fast_search_fail;
>>>>> -
>>>>>         /* Only scan within a pageblock boundary */
>>>>>         block_end_pfn = pageblock_end_pfn(low_pfn);
>>>>>     @@ -1976,7 +1968,6 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>>          * Do not cross the free scanner.
>>>>>          */
>>>>>         for (; block_end_pfn <= cc->free_pfn;
>>>>> -            fast_find_block = false,
>>>>>                 cc->migrate_pfn = low_pfn = block_end_pfn,
>>>>>                 block_start_pfn = block_end_pfn,
>>>>>                 block_end_pfn += pageblock_nr_pages) {
>>>>> @@ -2007,8 +1998,7 @@ static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
>>>>>              * before making it "skip" so other compaction instances do
>>>>>              * not scan the same block.
>>>>>              */
>>>>> -        if (pageblock_aligned(low_pfn) &&
>>>>> -            !fast_find_block && !isolation_suitable(cc, page))
>>>>> +        if (pageblock_aligned(low_pfn) && !isolation_suitable(cc, page))
>>>>
>>>> I do not think so. If the pageblock is found by fast_find_migrateblock(), that means it definitely has not been set the skip flag, so there is not need to call isolation_suitable() if fast_find_block is true, right?
>>>>
>>>>
>>> Actually, found pageblock could be set skip as:
>>> 1. other compactor could mark this pageblock as skip after zone lock is realeased
>>> in fast_find_migrateblock.
>>
>> Yes, but your patch also can not close this race window, that means it can also be set skip flag after the isolation_suitable() validation by other compactors.
>>
> Yes, I think it's still worth to remove a lot of fast_find_block relevant check and reduce
> code complexity with one redundant isolation_suitable which may skip some block with luck.
>>> 2. fast_find_migrateblock may uses pfn from reinit_migrate_pfn which is previously found
>>> and sacnned. It could be fully sacnned and marked skip after it's first return from
>>
>> Right, but now the 'fast_find_block' is false, and we will call isolation_suitable() to validate the skip flag.
>>
> Right, sorry for missing that.
> 
> But it's ok to keep the fast_find_block if you insist and I will just correct the stale

Yes, I still prefer to keep fast_find_block, since I do not see that 
this patch fixes any real issue, and it might have a side effect on 
fast-find-pageblock(?).

> comment that "fast_find_migrateblock marks a pageblock skipped ..." in next version.

Sure, please do it.


* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  8:01             ` Baolin Wang
@ 2023-08-01  8:42               ` Kemeng Shi
  2023-08-01  9:32                 ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01  8:42 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 4:01 PM, Baolin Wang wrote:
> 
> 
> On 8/1/2023 2:08 PM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>>
>>>
>>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>>
>>>>
>>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>>
>>>>>
>>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>>
>>>>>>
>>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>>> block forward and ignore the last page block in the online section.
>>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>>> this:
>>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>>> i.e. pfn of page block after online section.
>>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>>
>>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>>> ---
>>>>>>>     mm/compaction.c | 5 ++---
>>>>>>>     1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>>> --- a/mm/compaction.c
>>>>>>> +++ b/mm/compaction.c
>>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>>           while (start_nr-- > 0) {
>>>>>>>             if (online_section_nr(start_nr))
>>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>>
>>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>>
>>>>>>>         }
>>>>>>>           return 0;
>>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>>                   next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>>                 if (next_pfn)
>>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>>> -                              low_pfn);
>>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>>
>>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>>
>>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>>
>>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>>> aligned based on that we use section start from skip_offline_sections as
>>>>> block_start_fpn without align check.
>>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>>> If no one is against this, I will fix this in next version. THanks!
>>>>>
>>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>>> with 24 while PAGE_SHIFT could be configured to 18.
>>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>>
>>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>>
>> Right, I mixed up the unit.
>>> So I think your change is good:
>>> + block_start_pfn = max(next_pfn, low_pfn);
>>>
>>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>>
>> Sure, then we should assign block_start_pfn with following change. Is this good to you?
>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>> +         block_start_pfn = max(pageblock_end_pfn(next_pfn),
>>                                low_pfn);
> 
> The last pfn of a section is already section aligned, so I think no need to call pageblock_end_pfn(), just like your original change is okay to me.
> block_start_pfn = max(next_pfn, low_pfn);
> 
> 
Um, if we keep "block_start_pfn = max(next_pfn, low_pfn);", should we also keep
returning the end of the section, "section_nr_to_pfn(start_nr + 1);", instead of
the original last pfn of the section, "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;",
which is not pageblock aligned?
Assume SECTION_SIZE_BITS = 27, PAGE_SHIFT = 12, pageblock order = 10
Last pfn of the section 0 is 0x7fff, end pfn of section 0 is 0x8000. The last pfn
is not aligned.
Please tell me if I misunderstand anything. Thanks!

-- 
Best wishes
Kemeng Shi



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  8:42               ` Kemeng Shi
@ 2023-08-01  9:32                 ` Baolin Wang
  2023-08-01 12:33                   ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-01  9:32 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 4:42 PM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 4:01 PM, Baolin Wang wrote:
>>
>>
>> On 8/1/2023 2:08 PM, Kemeng Shi wrote:
>>>
>>>
>>> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>>>
>>>>
>>>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>>>
>>>>>
>>>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>>>
>>>>>>
>>>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>>>> block forward and ignore the last page block in the online section.
>>>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>>>> this:
>>>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>>>> i.e. pfn of page block after online section.
>>>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>>>
>>>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>>>> ---
>>>>>>>>      mm/compaction.c | 5 ++---
>>>>>>>>      1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>>>> --- a/mm/compaction.c
>>>>>>>> +++ b/mm/compaction.c
>>>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>>>            while (start_nr-- > 0) {
>>>>>>>>              if (online_section_nr(start_nr))
>>>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>>>
>>>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>>>
>>>>>>>>          }
>>>>>>>>            return 0;
>>>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>>>                    next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>>>                  if (next_pfn)
>>>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>>>> -                              low_pfn);
>>>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>>>
>>>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>>>
>>>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>>>
>>>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>>>> aligned based on that we use section start from skip_offline_sections as
>>>>>> block_start_fpn without align check.
>>>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>>>> If no one is against this, I will fix this in next version. THanks!
>>>>>>
>>>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>>>> with 24 while PAGE_SHIFT could be configured to 18.
>>>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>>>
>>>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>>>
>>> Right, I mixed up the unit.
>>>> So I think your change is good:
>>>> + block_start_pfn = max(next_pfn, low_pfn);
>>>>
>>>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>>>
>>> Sure, then we should assign block_start_pfn with following change. Is this good to you?
>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>> +         block_start_pfn = max(pageblock_end_pfn(next_pfn),
>>>                                 low_pfn);
>>
>> The last pfn of a section is already section aligned, so I think no need to call pageblock_end_pfn(), just like your original change is okay to me.
>> block_start_pfn = max(next_pfn, low_pfn);
>>
>>
> Um, if we keep "block_start_pfn = max(next_pfn, low_pfn);", should we also keep
> returning end of section "section_nr_to_pfn(start_nr + 1);" instead of original last
> pfn of the section "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;" which seems
> not aligned.
> Assume SECTION_SIZE_BITS = 27, PAGE_SHIFT = 12, pageblock order = 10
> Last pfn of the section 0 is 0x7fff, end pfn of section 0 is 0x8000. The last pfn
> is not aligned.
> Please tell me if I misunderstand anything. Thanks!

Ah, you are right, sorry for my bad arithmetic. Maybe we should return 
the end pfn (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION) of the 
section in skip_offline_sections_reverse(), adding some comments to 
explain the return value as David suggested. Then we can remove the 
pageblock_end_pfn() call in isolate_freepages().


* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01  9:32                 ` Baolin Wang
@ 2023-08-01 12:33                   ` Kemeng Shi
  2023-08-02  1:11                     ` Baolin Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Kemeng Shi @ 2023-08-01 12:33 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/1/2023 5:32 PM, Baolin Wang wrote:
> 
> 
> On 8/1/2023 4:42 PM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 4:01 PM, Baolin Wang wrote:
>>>
>>>
>>> On 8/1/2023 2:08 PM, Kemeng Shi wrote:
>>>>
>>>>
>>>> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>>>>
>>>>>
>>>>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>>>>
>>>>>>
>>>>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>>>>
>>>>>>>
>>>>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>>>>> block forward and ignore the last page block in the online section.
>>>>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>>>>> this:
>>>>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>>>>> i.e. pfn of page block after online section.
>>>>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>>>>
>>>>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>>>>> ---
>>>>>>>>>      mm/compaction.c | 5 ++---
>>>>>>>>>      1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>>>>> --- a/mm/compaction.c
>>>>>>>>> +++ b/mm/compaction.c
>>>>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>>>>            while (start_nr-- > 0) {
>>>>>>>>>              if (online_section_nr(start_nr))
>>>>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>>>>
>>>>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>>>>
>>>>>>>>>          }
>>>>>>>>>            return 0;
>>>>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>>>>                    next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>>>>                  if (next_pfn)
>>>>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>>>>> -                              low_pfn);
>>>>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>>>>
>>>>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>>>>
>>>>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>>>>
>>>>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>>>>> aligned based on that we use section start from skip_offline_sections as
>>>>>>> block_start_fpn without align check.
>>>>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>>>>> If no one is against this, I will fix this in next version. THanks!
>>>>>>>
>>>>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>>>>> with 24 while PAGE_SHIFT could be configured to 18.
>>>>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>>>>
>>>>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>>>>
>>>> Right, I mixed up the unit.
>>>>> So I think your change is good:
>>>>> + block_start_pfn = max(next_pfn, low_pfn);
>>>>>
>>>>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>>>>
>>>> Sure, then we should assign block_start_pfn with following change. Is this good to you?
>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>> +         block_start_pfn = max(pageblock_end_pfn(next_pfn),
>>>>                                 low_pfn);
>>>
>>> The last pfn of a section is already section aligned, so I think no need to call pageblock_end_pfn(), just like your original change is okay to me.
>>> block_start_pfn = max(next_pfn, low_pfn);
>>>
>>>
>> Um, if we keep "block_start_pfn = max(next_pfn, low_pfn);", should we also keep
>> returning end of section "section_nr_to_pfn(start_nr + 1);" instead of original last
>> pfn of the section "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;" which seems
>> not aligned.
>> Assume SECTION_SIZE_BITS = 27, PAGE_SHIFT = 12, pageblock order = 10
>> Last pfn of the section 0 is 0x7fff, end pfn of section 0 is 0x8000. The last pfn
>> is not aligned.
>> Please tell me if I misunderstand anything. Thanks!
> 
> Ah, you are right, sorry for my bad arithmetic. Maybe we should return the end pfn (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION) of the section in skip_offline_sections_reverse() with adding some comments to explain the return value like David suggested. Then we can remove the pageblock_end_pfn() in isolate_freepages().
> 
> 
Sure, I will add comments in the next version. As (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION)
is equal to section_nr_to_pfn(start_nr + 1), I will keep the change to skip_offline_sections_reverse
if that does not bother you.


-- 
Best wishes
Kemeng Shi



* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-01 12:33                   ` Kemeng Shi
@ 2023-08-02  1:11                     ` Baolin Wang
  2023-08-02  1:26                       ` Kemeng Shi
  0 siblings, 1 reply; 36+ messages in thread
From: Baolin Wang @ 2023-08-02  1:11 UTC (permalink / raw)
  To: Kemeng Shi, akpm, linux-mm, linux-kernel, mgorman, willy, david



On 8/1/2023 8:33 PM, Kemeng Shi wrote:
> 
> 
> on 8/1/2023 5:32 PM, Baolin Wang wrote:
>>
>>
>> On 8/1/2023 4:42 PM, Kemeng Shi wrote:
>>>
>>>
>>> on 8/1/2023 4:01 PM, Baolin Wang wrote:
>>>>
>>>>
>>>> On 8/1/2023 2:08 PM, Kemeng Shi wrote:
>>>>>
>>>>>
>>>>> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>>>>>
>>>>>>
>>>>>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>>>>>
>>>>>>>
>>>>>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>>>>>> block forward and ignore the last page block in the online section.
>>>>>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>>>>>> this:
>>>>>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>>>>>> i.e. pfn of page block after online section.
>>>>>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>>>>>
>>>>>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>>>>>> ---
>>>>>>>>>>       mm/compaction.c | 5 ++---
>>>>>>>>>>       1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>>>>>> --- a/mm/compaction.c
>>>>>>>>>> +++ b/mm/compaction.c
>>>>>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>>>>>             while (start_nr-- > 0) {
>>>>>>>>>>               if (online_section_nr(start_nr))
>>>>>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>>>>>
>>>>>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>>>>>
>>>>>>>>>>           }
>>>>>>>>>>             return 0;
>>>>>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>>>>>                     next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>>>>>                   if (next_pfn)
>>>>>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>>>>>> -                              low_pfn);
>>>>>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>>>>>
>>>>>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>>>>>
>>>>>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>>>>>
>>>>>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>>>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>>>>>> aligned based on that we use section start from skip_offline_sections as
>>>>>>>> block_start_fpn without align check.
>>>>>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>>>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>>>>>> If no one is against this, I will fix this in next version. THanks!
>>>>>>>>
>>>>>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>>>>>> with 24 while PAGE_SHIFT could be configured to 18.
>>>>>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>>>>>
>>>>>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>>>>>
>>>>> Right, I mixed up the unit.
>>>>>> So I think your change is good:
>>>>>> + block_start_pfn = max(next_pfn, low_pfn);
>>>>>>
>>>>>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>>>>>
>>>>> Sure, then we should assign block_start_pfn with following change. Is this good to you?
>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>> +         block_start_pfn = max(pageblock_end_pfn(next_pfn),
>>>>>                                  low_pfn);
>>>>
>>>> The last pfn of a section is already section aligned, so I think no need to call pageblock_end_pfn(), just like your original change is okay to me.
>>>> block_start_pfn = max(next_pfn, low_pfn);
>>>>
>>>>
>>> Um, if we keep "block_start_pfn = max(next_pfn, low_pfn);", should we also keep
>>> returning end of section "section_nr_to_pfn(start_nr + 1);" instead of original last
>>> pfn of the section "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;" which seems
>>> not aligned.
>>> Assume SECTION_SIZE_BITS = 27, PAGE_SHIFT = 12, pageblock order = 10
>>> Last pfn of the section 0 is 0x7fff, end pfn of section 0 is 0x8000. The last pfn
>>> is not aligned.
>>> Please tell me if I misunderstand anything. Thanks!
>>
>> Ah, you are right, sorry for my bad arithmetic. Maybe we should return the end pfn (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION) of the section in skip_offline_sections_reverse() with adding some comments to explain the return value like David suggested. Then we can remove the pageblock_end_pfn() in isolate_freepages().
>>
>>
> Sure, I will add comments in next version. As (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION)
> is = section_nr_to_pfn(start_nr + 1), I will keep the change to skip_offline_sections_reverse

IMO, referring to the next section is confusing. We need to return the
end pfn of the current online section, and we usually get it as
"section_nr_to_pfn(start_nr) + PAGES_PER_SECTION".


* Re: [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections
  2023-08-02  1:11                     ` Baolin Wang
@ 2023-08-02  1:26                       ` Kemeng Shi
  0 siblings, 0 replies; 36+ messages in thread
From: Kemeng Shi @ 2023-08-02  1:26 UTC (permalink / raw)
  To: Baolin Wang, akpm, linux-mm, linux-kernel, mgorman, willy, david



on 8/2/2023 9:11 AM, Baolin Wang wrote:
> 
> 
> On 8/1/2023 8:33 PM, Kemeng Shi wrote:
>>
>>
>> on 8/1/2023 5:32 PM, Baolin Wang wrote:
>>>
>>>
>>> On 8/1/2023 4:42 PM, Kemeng Shi wrote:
>>>>
>>>>
>>>> on 8/1/2023 4:01 PM, Baolin Wang wrote:
>>>>>
>>>>>
>>>>> On 8/1/2023 2:08 PM, Kemeng Shi wrote:
>>>>>>
>>>>>>
>>>>>> on 8/1/2023 11:53 AM, Baolin Wang wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 8/1/2023 10:36 AM, Kemeng Shi wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> on 8/1/2023 10:18 AM, Kemeng Shi wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> on 7/31/2023 8:01 PM, Baolin Wang wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 7/29/2023 1:10 AM, Kemeng Shi wrote:
>>>>>>>>>>> skip_offline_sections_reverse will return the last pfn in found online
>>>>>>>>>>> section. Then we set block_start_pfn to start of page block which
>>>>>>>>>>> contains the last pfn in section. Then we continue, move one page
>>>>>>>>>>> block forward and ignore the last page block in the online section.
>>>>>>>>>>> Make block_start_pfn point to first page block after online section to fix
>>>>>>>>>>> this:
>>>>>>>>>>> 1. make skip_offline_sections_reverse return end pfn of online section,
>>>>>>>>>>> i.e. pfn of page block after online section.
>>>>>>>>>>> 2. assign block_start_pfn with next_pfn.
>>>>>>>>>>>
>>>>>>>>>>> Fixes: f63224525309 ("mm: compaction: skip the memory hole rapidly when isolating free pages")
>>>>>>>>>>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
>>>>>>>>>>> ---
>>>>>>>>>>>       mm/compaction.c | 5 ++---
>>>>>>>>>>>       1 file changed, 2 insertions(+), 3 deletions(-)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>>>>>>>>> index 9b7a0a69e19f..ce7841363b12 100644
>>>>>>>>>>> --- a/mm/compaction.c
>>>>>>>>>>> +++ b/mm/compaction.c
>>>>>>>>>>> @@ -259,7 +259,7 @@ static unsigned long skip_offline_sections_reverse(unsigned long start_pfn)
>>>>>>>>>>>             while (start_nr-- > 0) {
>>>>>>>>>>>               if (online_section_nr(start_nr))
>>>>>>>>>>> -            return section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;
>>>>>>>>>>> +            return section_nr_to_pfn(start_nr + 1);
>>>>>>>>>>
>>>>>>>>>> This is incorrect, you returned the start pfn of this section.
>>>>>>>>>>
>>>>>>>>>>>           }
>>>>>>>>>>>             return 0;
>>>>>>>>>>> @@ -1670,8 +1670,7 @@ static void isolate_freepages(struct compact_control *cc)
>>>>>>>>>>>                     next_pfn = skip_offline_sections_reverse(block_start_pfn);
>>>>>>>>>>>                   if (next_pfn)
>>>>>>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>>>>>>> -                              low_pfn);
>>>>>>>>>>> +                block_start_pfn = max(next_pfn, low_pfn);
>>>>>>>>>>
>>>>>>>>>> 'block_start_pfn' should be pageblock aligned. If the 'next_pfn' is not pageblock-aligned (though this is not the common case), we should skip it.
>>>>>>>>>>
>>>>>>>>>> But if the 'next_pfn' is pageblock-aligned, yes, the commit f63224525309 still ignores the last pageblock, which is not right. So I think it should be:
>>>>>>>>>> block_start_pfn = pageblock_aligned(next_pfn) ? : pageblock_start_pfn(next_pfn);
>>>>>>>>>> block_start_pfn = max(block_start_pfn, low_pfn);
>>>>>>>>>>
>>>>>>>>> Hi Baolin, thanks for reply! As skip_offline_sections_reverse is based
>>>>>>>>> on skip_offline_sections. I make the assumption that section is pageblock
>>>>>>>>> aligned based on that we use section start from skip_offline_sections as
>>>>>>>>> block_start_fpn without align check.
>>>>>>>>> If section size is not pageblock aligned in real world, the pageblock aligned
>>>>>>>>> check should be added to skip_offline_sections and skip_offline_sections_reverse.
>>>>>>>>> If no one is against this, I will fix this in next version. THanks!
>>>>>>>>>
>>>>>>>> More information of aligment of section. For powerpc arch, we have SECTION_SIZE_BITS
>>>>>>>> with 24 while PAGE_SHIFT could be configured to 18.
>>>>>>>> Pageblock order is (18 + MAX_ORDER) which coule be 28 and is > SECTION_SZIE_BITS 24,
>>>>>>>
>>>>>>> The maximum pageblock order is MAX_ORDER. But after thinking more, I think return the start pfn or end pfn of a section is okay, and it should be aligned to a pageblock order IIUC.
>>>>>>>
>>>>>> Right, I mixed up the unit.
>>>>>>> So I think your change is good:
>>>>>>> + block_start_pfn = max(next_pfn, low_pfn);
>>>>>>>
>>>>>>> But in skip_offline_sections_reverse(), we should still return the last pfn of the online section.
>>>>>>>
>>>>>> Sure, then we should assign block_start_pfn with following change. Is this good to you?
>>>>>> -                block_start_pfn = max(pageblock_start_pfn(next_pfn),
>>>>>> +         block_start_pfn = max(pageblock_end_pfn(next_pfn),
>>>>>>                                  low_pfn);
>>>>>
>>>>> The last pfn of a section is already section aligned, so I think no need to call pageblock_end_pfn(), just like your original change is okay to me.
>>>>> block_start_pfn = max(next_pfn, low_pfn);
>>>>>
>>>>>
>>>> Um, if we keep "block_start_pfn = max(next_pfn, low_pfn);", should we also keep
>>>> returning end of section "section_nr_to_pfn(start_nr + 1);" instead of original last
>>>> pfn of the section "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION - 1;" which seems
>>>> not aligned.
>>>> Assume SECTION_SIZE_BITS = 27, PAGE_SHIFT = 12, pageblock order = 10
>>>> Last pfn of the section 0 is 0x7fff, end pfn of section 0 is 0x8000. The last pfn
>>>> is not aligned.
>>>> Please tell me if I misunderstand anything. Thanks!
>>>
>>> Ah, you are right, sorry for my bad arithmetic. Maybe we should return the end pfn (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION) of the section in skip_offline_sections_reverse() with adding some comments to explain the return value like David suggested. Then we can remove the pageblock_end_pfn() in isolate_freepages().
>>>
>>>
>> Sure, I will add comments in next version. As (section_nr_to_pfn(start_nr) + PAGES_PER_SECTION)
>> is = section_nr_to_pfn(start_nr + 1), I will keep the change to skip_offline_sections_reverse
> 
> IMO, next section is confusing. We need return the end pfn of the current online section, and we usually get it by "section_nr_to_pfn(start_nr) + PAGES_PER_SECTION".
> 
Thanks for the reply! I will do this in the next version.
-- 
Best wishes
Kemeng Shi



end of thread, other threads:[~2023-08-02  1:27 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-28 17:10 [PATCH 0/8] Fixes and cleanups to compaction Kemeng Shi
2023-07-28 17:10 ` [PATCH 1/8] mm/compaction: avoid missing last page block in section after skip offline sections Kemeng Shi
2023-07-28 10:41   ` David Hildenbrand
2023-07-29  2:23     ` Kemeng Shi
2023-07-31 12:01   ` Baolin Wang
2023-08-01  2:18     ` Kemeng Shi
2023-08-01  2:36       ` Kemeng Shi
2023-08-01  3:53         ` Baolin Wang
2023-08-01  6:08           ` Kemeng Shi
2023-08-01  8:01             ` Baolin Wang
2023-08-01  8:42               ` Kemeng Shi
2023-08-01  9:32                 ` Baolin Wang
2023-08-01 12:33                   ` Kemeng Shi
2023-08-02  1:11                     ` Baolin Wang
2023-08-02  1:26                       ` Kemeng Shi
2023-07-28 17:10 ` [PATCH 2/8] mm/compaction: correct last_migrated_pfn update in compact_zone Kemeng Shi
2023-08-01  2:09   ` Baolin Wang
2023-08-01  2:19     ` Kemeng Shi
2023-07-28 17:10 ` [PATCH 3/8] mm/compaction: skip page block marked skip in isolate_migratepages_block Kemeng Shi
2023-08-01  2:35   ` Baolin Wang
2023-07-28 17:10 ` [PATCH 4/8] mm/compaction: remove stale fast_find_block flag in isolate_migratepages Kemeng Shi
2023-08-01  2:42   ` Baolin Wang
2023-08-01  3:24     ` Kemeng Shi
2023-08-01  3:34       ` Baolin Wang
2023-08-01  3:48         ` Kemeng Shi
2023-08-01  8:15           ` Baolin Wang
2023-07-28 17:10 ` [PATCH 5/8] mm/compaction: corret comment of cached migrate pfn update Kemeng Shi
2023-07-28 10:35   ` David Hildenbrand
2023-08-01  2:45   ` Baolin Wang
2023-07-28 17:10 ` [PATCH 6/8] mm/compaction: correct comment to complete migration failure Kemeng Shi
2023-07-28 17:10 ` [PATCH 7/8] mm/compaction: remove unnecessary return for void function Kemeng Shi
2023-07-28 10:30   ` David Hildenbrand
2023-08-01  2:53   ` Baolin Wang
2023-07-28 17:10 ` [PATCH 8/8] mm/compaction: only set skip flag if cc->no_set_skip_hint is false Kemeng Shi
2023-07-28 10:33   ` David Hildenbrand
2023-08-01  3:02   ` Baolin Wang
