linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones
@ 2022-03-27  2:41 Wei Yang
  2022-03-27  2:41 ` [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone Wei Yang
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Wei Yang @ 2022-03-27  2:41 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, ying.huang, mgorman, Wei Yang

As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
reclaim from zones with pages managed by the buddy allocator"), reclaim
only affects managed_zones.

Let's adjust the code and comment accordingly.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7ad54b770bb1..89745cf34386 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1031,7 +1031,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
 	for (i = 0; i < MAX_NR_ZONES; i++) {
 		struct zone *zone = pgdat->node_zones + i;
 
-		if (!populated_zone(zone))
+		if (!managed_zone(zone))
 			continue;
 
 		reclaimable += zone_reclaimable_pages(zone);
@@ -3912,7 +3912,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 	}
 
 	/*
-	 * If a node has no populated zone within highest_zoneidx, it does not
+	 * If a node has no managed zone within highest_zoneidx, it does not
 	 * need balancing by definition. This can happen if a zone-restricted
 	 * allocation tries to wake a remote kswapd.
 	 */
-- 
2.33.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-27  2:41 [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Wei Yang
@ 2022-03-27  2:41 ` Wei Yang
  2022-03-28  1:08   ` Huang, Ying
  2022-03-28  7:11 ` [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Miaohe Lin
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Wei Yang @ 2022-03-27  2:41 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, ying.huang, mgorman, Wei Yang

wakeup_kswapd() only wakes up kswapd when the zone is managed.

Two callers of wakeup_kswapd() pick the zone from a node perspective:

  * wake_all_kswapds
  * numamigrate_isolate_page

If we pick up a !managed zone there, this is not what we expect.

This patch makes sure we pick up a managed zone for wakeup_kswapd().
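
For reference, wakeup_kswapd() in mm/vmscan.c bails out early in that case,
so calling it with a !managed zone is a silent no-op. A rough sketch of the
relevant check (paraphrased, not the exact source):

        void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
                           enum zone_type highest_zoneidx)
        {
                pg_data_t *pgdat;

                /* a zone with no pages managed by buddy never needs kswapd */
                if (!managed_zone(zone))
                        return;
                ...
        }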

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/migrate.c    | 2 +-
 mm/page_alloc.c | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 3d60823afd2d..c4b654c0bdf0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
 			return 0;
 		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
-			if (populated_zone(pgdat->node_zones + z))
+			if (managed_zone(pgdat->node_zones + z))
 				break;
 		}
 		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c0c4ef94ba0..6656c2d06e01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
 
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
 					ac->nodemask) {
+		if (!managed_zone(zone))
+			continue;
 		if (last_pgdat != zone->zone_pgdat)
 			wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
 		last_pgdat = zone->zone_pgdat;
-- 
2.33.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-27  2:41 ` [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone Wei Yang
@ 2022-03-28  1:08   ` Huang, Ying
  2022-03-28  7:23     ` Miaohe Lin
  2022-03-29  0:41     ` Wei Yang
  0 siblings, 2 replies; 17+ messages in thread
From: Huang, Ying @ 2022-03-28  1:08 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, linux-kernel, mgorman

Hi, Wei,

Wei Yang <richard.weiyang@gmail.com> writes:

> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>
> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>
>   * wake_all_kswapds
>   * numamigrate_isolate_page
>
> If we pick up a !managed zone there, this is not what we expect.
>
> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/migrate.c    | 2 +-
>  mm/page_alloc.c | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 3d60823afd2d..c4b654c0bdf0 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>  			return 0;
>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> -			if (populated_zone(pgdat->node_zones + z))
> +			if (managed_zone(pgdat->node_zones + z))

This looks good to me!  Thanks!  It seems that we can replace
populated_zone() in migrate_balanced_pgdat() too.  Right?
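
For context, the check in question is at the top of the zone loop in
migrate_balanced_pgdat(); roughly (quoting from memory, details may differ):

        for (z = pgdat->nr_zones - 1; z >= 0; z--) {
                struct zone *zone = pgdat->node_zones + z;

                if (!populated_zone(zone))
                        continue;
                ...
        }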

>  				break;
>  		}
>  		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4c0c4ef94ba0..6656c2d06e01 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>  
>  	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
>  					ac->nodemask) {
> +		if (!managed_zone(zone))
> +			continue;
>  		if (last_pgdat != zone->zone_pgdat)
>  			wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
>  		last_pgdat = zone->zone_pgdat;

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones
  2022-03-27  2:41 [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Wei Yang
  2022-03-27  2:41 ` [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone Wei Yang
@ 2022-03-28  7:11 ` Miaohe Lin
  2022-03-28  7:33 ` David Hildenbrand
  2022-03-28  8:14 ` Oscar Salvador
  3 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2022-03-28  7:11 UTC (permalink / raw)
  To: Wei Yang; +Cc: linux-mm, linux-kernel, ying.huang, mgorman, Andrew Morton

On 2022/3/27 10:41, Wei Yang wrote:
> As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
> reclaim from zones with pages managed by the buddy allocator"), reclaim
> only affects managed_zones.
> 
> Let's adjust the code and comment accordingly.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

Looks good to me. Thanks.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7ad54b770bb1..89745cf34386 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1031,7 +1031,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
>  	for (i = 0; i < MAX_NR_ZONES; i++) {
>  		struct zone *zone = pgdat->node_zones + i;
>  
> -		if (!populated_zone(zone))
> +		if (!managed_zone(zone))
>  			continue;
>  
>  		reclaimable += zone_reclaimable_pages(zone);
> @@ -3912,7 +3912,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  	}
>  
>  	/*
> -	 * If a node has no populated zone within highest_zoneidx, it does not
> +	 * If a node has no managed zone within highest_zoneidx, it does not
>  	 * need balancing by definition. This can happen if a zone-restricted
>  	 * allocation tries to wake a remote kswapd.
>  	 */
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-28  1:08   ` Huang, Ying
@ 2022-03-28  7:23     ` Miaohe Lin
  2022-03-29  0:45       ` Wei Yang
  2022-03-29  0:41     ` Wei Yang
  1 sibling, 1 reply; 17+ messages in thread
From: Miaohe Lin @ 2022-03-28  7:23 UTC (permalink / raw)
  To: Huang, Ying, Wei Yang; +Cc: akpm, linux-mm, linux-kernel, mgorman

On 2022/3/28 9:08, Huang, Ying wrote:
> Hi, Wei,
> 
> Wei Yang <richard.weiyang@gmail.com> writes:
> 
>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>
>> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>>
>>   * wake_all_kswapds
>>   * numamigrate_isolate_page
>>
>> If we pick up a !managed zone there, this is not what we expect.
>>
>> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/migrate.c    | 2 +-
>>  mm/page_alloc.c | 2 ++
>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 3d60823afd2d..c4b654c0bdf0 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>  			return 0;
>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>> -			if (populated_zone(pgdat->node_zones + z))
>> +			if (managed_zone(pgdat->node_zones + z))
> 
> This looks good to me!  Thanks!  It seems that we can replace
> populated_zone() in migrate_balanced_pgdat() too.  Right?

This patch looks good to me too. Thanks!

BTW: This reminds me of my bewilderment when I read the relevant code.
It would be very kind of you to explain the difference between
managed_zone and populated_zone. IIUC, when the caller relies on the
activity of the buddy system, managed_zone should always be used. I think
there are many places, like compaction, that need to use managed_zone but
use populated_zone now. They might need to change to use managed_zone
too. Or am I missing something?
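
One example of the pattern (a rough sketch of a zone loop as found in
mm/compaction.c, from memory, so details may differ):

        for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
                struct zone *zone = &pgdat->node_zones[zoneid];

                if (!populated_zone(zone))
                        continue;
                /* ... compact this zone ... */
        }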

Many Thanks. :)

> 
>>  				break;
>>  		}
>>  		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 4c0c4ef94ba0..6656c2d06e01 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
>>  
>>  	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
>>  					ac->nodemask) {
>> +		if (!managed_zone(zone))
>> +			continue;
>>  		if (last_pgdat != zone->zone_pgdat)
>>  			wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
>>  		last_pgdat = zone->zone_pgdat;
> 
> Best Regards,
> Huang, Ying
> 
> .
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones
  2022-03-27  2:41 [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Wei Yang
  2022-03-27  2:41 ` [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone Wei Yang
  2022-03-28  7:11 ` [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Miaohe Lin
@ 2022-03-28  7:33 ` David Hildenbrand
  2022-03-28  8:14 ` Oscar Salvador
  3 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2022-03-28  7:33 UTC (permalink / raw)
  To: Wei Yang, akpm; +Cc: linux-mm, linux-kernel, ying.huang, mgorman

On 27.03.22 04:41, Wei Yang wrote:
> As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
> reclaim from zones with pages managed by the buddy allocator"), reclaim
> only affects managed_zones.
> 
> Let's adjust the code and comment accordingly.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7ad54b770bb1..89745cf34386 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1031,7 +1031,7 @@ static bool skip_throttle_noprogress(pg_data_t *pgdat)
>  	for (i = 0; i < MAX_NR_ZONES; i++) {
>  		struct zone *zone = pgdat->node_zones + i;
>  
> -		if (!populated_zone(zone))
> +		if (!managed_zone(zone))
>  			continue;
>  
>  		reclaimable += zone_reclaimable_pages(zone);
> @@ -3912,7 +3912,7 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  	}
>  
>  	/*
> -	 * If a node has no populated zone within highest_zoneidx, it does not
> +	 * If a node has no managed zone within highest_zoneidx, it does not
>  	 * need balancing by definition. This can happen if a zone-restricted
>  	 * allocation tries to wake a remote kswapd.
>  	 */

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones
  2022-03-27  2:41 [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Wei Yang
                   ` (2 preceding siblings ...)
  2022-03-28  7:33 ` David Hildenbrand
@ 2022-03-28  8:14 ` Oscar Salvador
  2022-03-29  0:48   ` Wei Yang
  3 siblings, 1 reply; 17+ messages in thread
From: Oscar Salvador @ 2022-03-28  8:14 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, linux-kernel, ying.huang, mgorman

On Sun, Mar 27, 2022 at 02:41:00AM +0000, Wei Yang wrote:
> As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
> reclaim from zones with pages managed by the buddy allocator"), reclaim
> only affects managed_zones.
> 
> Let's adjust the code and comment accordingly.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>

LGTM,

Reviewed-by: Oscar Salvador <osalvador@suse.de>

We still have some other places scattered all over where we use
populated_zone().
I think it would be great to check whether all those usages are
right.


-- 
Oscar Salvador
SUSE Labs

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-28  1:08   ` Huang, Ying
  2022-03-28  7:23     ` Miaohe Lin
@ 2022-03-29  0:41     ` Wei Yang
  2022-03-29  0:43       ` Huang, Ying
  1 sibling, 1 reply; 17+ messages in thread
From: Wei Yang @ 2022-03-29  0:41 UTC (permalink / raw)
  To: Huang, Ying; +Cc: Wei Yang, akpm, linux-mm, linux-kernel, mgorman

On Mon, Mar 28, 2022 at 09:08:34AM +0800, Huang, Ying wrote:
>Hi, Wei,
>
>Wei Yang <richard.weiyang@gmail.com> writes:
>
>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>
>> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>>
>>   * wake_all_kswapds
>>   * numamigrate_isolate_page
>>
>> If we pick up a !managed zone there, this is not what we expect.
>>
>> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/migrate.c    | 2 +-
>>  mm/page_alloc.c | 2 ++
>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 3d60823afd2d..c4b654c0bdf0 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>  			return 0;
>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>> -			if (populated_zone(pgdat->node_zones + z))
>> +			if (managed_zone(pgdat->node_zones + z))
>
>This looks good to me!  Thanks!  It seems that we can replace
>populated_zone() in migrate_balanced_pgdat() too.  Right?
>

Yes, you are right. I didn't spot this.

While this patch comes from following the clue of wakeup_kswapd(), I am not
sure it is nice to fold that change into this patch.

Which way do you prefer to include it: merged into this one, or as a
separate patch?

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  0:41     ` Wei Yang
@ 2022-03-29  0:43       ` Huang, Ying
  2022-03-29  1:52         ` Wei Yang
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2022-03-29  0:43 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, linux-kernel, mgorman

Wei Yang <richard.weiyang@gmail.com> writes:

> On Mon, Mar 28, 2022 at 09:08:34AM +0800, Huang, Ying wrote:
>>Hi, Wei,
>>
>>Wei Yang <richard.weiyang@gmail.com> writes:
>>
>>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>>
>>> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>>>
>>>   * wake_all_kswapds
>>>   * numamigrate_isolate_page
>>>
>>> If we pick up a !managed zone there, this is not what we expect.
>>>
>>> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> ---
>>>  mm/migrate.c    | 2 +-
>>>  mm/page_alloc.c | 2 ++
>>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index 3d60823afd2d..c4b654c0bdf0 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>  			return 0;
>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>> -			if (populated_zone(pgdat->node_zones + z))
>>> +			if (managed_zone(pgdat->node_zones + z))
>>
>>This looks good to me!  Thanks!  It seems that we can replace
>>populated_zone() in migrate_balanced_pgdat() too.  Right?
>>
>
> Yes, you are right. I didn't spot this.
>
> While this patch comes from following the clue of wakeup_kswapd(), I am not
> sure it is nice to fold that change into this patch.
>
> Which way do you prefer to include it: merged into this one, or as a
> separate patch?

Either is OK for me.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-28  7:23     ` Miaohe Lin
@ 2022-03-29  0:45       ` Wei Yang
  2022-03-29  1:55         ` Miaohe Lin
  0 siblings, 1 reply; 17+ messages in thread
From: Wei Yang @ 2022-03-29  0:45 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Huang, Ying, Wei Yang, akpm, linux-mm, linux-kernel, mgorman

On Mon, Mar 28, 2022 at 03:23:49PM +0800, Miaohe Lin wrote:
>On 2022/3/28 9:08, Huang, Ying wrote:
>> Hi, Wei,
>> 
>> Wei Yang <richard.weiyang@gmail.com> writes:
>> 
>>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>>
>>> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>>>
>>>   * wake_all_kswapds
>>>   * numamigrate_isolate_page
>>>
>>> If we pick up a !managed zone there, this is not what we expect.
>>>
>>> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> ---
>>>  mm/migrate.c    | 2 +-
>>>  mm/page_alloc.c | 2 ++
>>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index 3d60823afd2d..c4b654c0bdf0 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>  			return 0;
>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>> -			if (populated_zone(pgdat->node_zones + z))
>>> +			if (managed_zone(pgdat->node_zones + z))
>> 
>> This looks good to me!  Thanks!  It seems that we can replace
>> populated_zone() in migrate_balanced_pgdat() too.  Right?
>
>This patch looks good to me too. Thanks!
>
>BTW: This reminds me of my bewilderment when I read the relevant code.
>It would be very kind of you to explain the difference between
>managed_zone and populated_zone. IIUC, when the caller relies on the

The difference is that managed_zone means the zone has pages managed by the
buddy allocator, while populated_zone only means the zone has present pages,
which may all be reserved.
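
A minimal sketch of the two helpers, roughly as in include/linux/mmzone.h
(paraphrased here for illustration):

        /* the zone has physical pages, but they may all be reserved */
        static inline bool populated_zone(struct zone *zone)
        {
                return zone->present_pages;
        }

        /* the zone has pages actually handed to the buddy allocator */
        static inline bool managed_zone(struct zone *zone)
        {
                return zone_managed_pages(zone);
        }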

>activity of the buddy system, managed_zone should always be used. I think
>there are many places, like compaction, that need to use managed_zone but
>use populated_zone now. They might need to change to use managed_zone
>too. Or am I missing something?

This thread comes from reading commit 6aa303defb74, which adjusted the
vmscan code. It looks like there is some misuse in compaction, but I haven't
had time to go through it.

>
>Many Thanks. :)

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones
  2022-03-28  8:14 ` Oscar Salvador
@ 2022-03-29  0:48   ` Wei Yang
  0 siblings, 0 replies; 17+ messages in thread
From: Wei Yang @ 2022-03-29  0:48 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Wei Yang, akpm, linux-mm, linux-kernel, ying.huang, mgorman

On Mon, Mar 28, 2022 at 10:14:10AM +0200, Oscar Salvador wrote:
>On Sun, Mar 27, 2022 at 02:41:00AM +0000, Wei Yang wrote:
>> As mentioned in commit 6aa303defb74 ("mm, vmscan: only allocate and
>> reclaim from zones with pages managed by the buddy allocator"), reclaim
>> only affects managed_zones.
>> 
>> Let's adjust the code and comment accordingly.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>
>LGTM,
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>
>We still have some other places scattered all over where we use
>populated_zone().
>I think it would be great to check whether all those usages are
>right.
>

Thanks.

This time I have checked the vmscan-related places, and it looks like all the
related parts are fixed. I haven't had a chance to check the others yet.

>
>-- 
>Oscar Salvador
>SUSE Labs

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  0:43       ` Huang, Ying
@ 2022-03-29  1:52         ` Wei Yang
  2022-03-29  2:05           ` Huang, Ying
  2022-03-29  2:22           ` Matthew Wilcox
  0 siblings, 2 replies; 17+ messages in thread
From: Wei Yang @ 2022-03-29  1:52 UTC (permalink / raw)
  To: Huang, Ying; +Cc: Wei Yang, akpm, linux-mm, linux-kernel, mgorman

On Tue, Mar 29, 2022 at 08:43:23AM +0800, Huang, Ying wrote:
[...]
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>>  			return 0;
>>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>>> -			if (populated_zone(pgdat->node_zones + z))
>>>> +			if (managed_zone(pgdat->node_zones + z))
>>>
>>>This looks good to me!  Thanks!  It seems that we can replace
>>>populated_zone() in migrate_balanced_pgdat() too.  Right?
>>>
>>
>> Yes, you are right. I didn't spot this.
>>
>> While this patch comes from the clue of wakeup_kswapd(), I am not sure it is
>> nice to put it in this patch together.
>>
>> Which way you prefer to include this: merge the change into this one, or a
>> separate one?
>
>Either is OK for me.
>

After reading the code, I would like to do a little simplification. Does this
look good to you?

From 85c8a5cd708ada3e9f5b0409413407b7be1bc446 Mon Sep 17 00:00:00 2001
From: Wei Yang <richard.weiyang@gmail.com>
Date: Tue, 29 Mar 2022 09:24:36 +0800
Subject: [PATCH] mm/migrate.c: return valid zone for wakeup_kswapd from
 migrate_balanced_pgdat()

To wake up kswapd, we need to iterate pgdat->node_zones and get the
proper zone. However, this work has already been done in
migrate_balanced_pgdat().

Let's return the valid zone directly instead of doing the iteration again.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/migrate.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5adc55b5347c..b086bd781956 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1973,7 +1973,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
  * Returns true if this is a safe migration target node for misplaced NUMA
  * pages. Currently it only checks the watermarks which is crude.
  */
-static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
+static struct zone *migrate_balanced_pgdat(struct pglist_data *pgdat,
 				   unsigned long nr_migrate_pages)
 {
 	int z;
@@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
 			continue;
 
 		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
-		if (!zone_watermark_ok(zone, 0,
+		if (zone_watermark_ok(zone, 0,
 				       high_wmark_pages(zone) +
 				       nr_migrate_pages,
 				       ZONE_MOVABLE, 0))
-			continue;
-		return true;
+			return zone;
 	}
-	return false;
+	return NULL;
 }
 
 static struct page *alloc_misplaced_dst_page(struct page *page,
@@ -2032,6 +2031,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
 	int order = compound_order(page);
+	struct zone *zone;
 
 	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
@@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
-		int z;
-
+	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
 			return 0;
-		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
-			if (managed_zone(pgdat->node_zones + z))
-				break;
-		}
-		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
+
+		wakeup_kswapd(zone, 0, order, ZONE_MOVABLE);
 		return 0;
 	}
 
-- 
2.33.1


>Best Regards,
>Huang, Ying

-- 
Wei Yang
Help you, Help me

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  0:45       ` Wei Yang
@ 2022-03-29  1:55         ` Miaohe Lin
  0 siblings, 0 replies; 17+ messages in thread
From: Miaohe Lin @ 2022-03-29  1:55 UTC (permalink / raw)
  To: Wei Yang; +Cc: Huang, Ying, akpm, linux-mm, linux-kernel, mgorman

On 2022/3/29 8:45, Wei Yang wrote:
> On Mon, Mar 28, 2022 at 03:23:49PM +0800, Miaohe Lin wrote:
>> On 2022/3/28 9:08, Huang, Ying wrote:
>>> Hi, Wei,
>>>
>>> Wei Yang <richard.weiyang@gmail.com> writes:
>>>
>>>> wakeup_kswapd() only wakes up kswapd when the zone is managed.
>>>>
>>>> Two callers of wakeup_kswapd() pick the zone from a node perspective:
>>>>
>>>>   * wake_all_kswapds
>>>>   * numamigrate_isolate_page
>>>>
>>>> If we pick up a !managed zone there, this is not what we expect.
>>>>
>>>> This patch makes sure we pick up a managed zone for wakeup_kswapd().
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>> ---
>>>>  mm/migrate.c    | 2 +-
>>>>  mm/page_alloc.c | 2 ++
>>>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 3d60823afd2d..c4b654c0bdf0 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>>  			return 0;
>>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>>> -			if (populated_zone(pgdat->node_zones + z))
>>>> +			if (managed_zone(pgdat->node_zones + z))
>>>
>>> This looks good to me!  Thanks!  It seems that we can replace
>>> populated_zone() in migrate_balanced_pgdat() too.  Right?
>>
>> This patch looks good to me too. Thanks!
>>
>> BTW: This reminds me of my bewilderment when I read the relevant code.
>> It would be very kind of you to explain the difference between
>> managed_zone and populated_zone. IIUC, when the caller relies on the
> 
> The difference is that managed_zone means the zone has pages managed by the
> buddy allocator, while populated_zone only means the zone has present pages,
> which may all be reserved.

That's just what I understand. Thanks. :)

> 
>> activity of the buddy system, managed_zone should always be used. I think
>> there are many places, like compaction, that need to use managed_zone but
>> use populated_zone now. They might need to change to use managed_zone
>> too. Or am I missing something?
> 
> This thread comes from reading commit 6aa303defb74, which adjusted the
> vmscan code. It looks like there is some misuse in compaction, but I haven't
> had time to go through it.

I see. Thanks for the work.

> 
>>
>> Many Thanks. :)
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  1:52         ` Wei Yang
@ 2022-03-29  2:05           ` Huang, Ying
  2022-03-30  0:14             ` Wei Yang
  2022-03-29  2:22           ` Matthew Wilcox
  1 sibling, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2022-03-29  2:05 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, linux-kernel, mgorman

Wei Yang <richard.weiyang@gmail.com> writes:

> On Tue, Mar 29, 2022 at 08:43:23AM +0800, Huang, Ying wrote:
> [...]
>>>>> --- a/mm/migrate.c
>>>>> +++ b/mm/migrate.c
>>>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>>>  			return 0;
>>>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>>>> -			if (populated_zone(pgdat->node_zones + z))
>>>>> +			if (managed_zone(pgdat->node_zones + z))
>>>>
>>>>This looks good to me!  Thanks!  It seems that we can replace
>>>>populated_zone() in migrate_balanced_pgdat() too.  Right?
>>>>
>>>
>>> Yes, you are right. I didn't spot this.
>>>
>>> While this patch comes from following the clue of wakeup_kswapd(), I am not
>>> sure it is nice to fold that change into this patch.
>>>
>>> Which way do you prefer to include it: merged into this one, or as a
>>> separate patch?
>>
>>Either is OK for me.
>>
>
> After reading the code, I would like to do a little simplification. Does this
> look good to you?
>
> From 85c8a5cd708ada3e9f5b0409413407b7be1bc446 Mon Sep 17 00:00:00 2001
> From: Wei Yang <richard.weiyang@gmail.com>
> Date: Tue, 29 Mar 2022 09:24:36 +0800
> Subject: [PATCH] mm/migrate.c: return valid zone for wakeup_kswapd from
>  migrate_balanced_pgdat()
>
> To wake up kswapd, we need to iterate pgdat->node_zones and get the
> proper zone. However, this work has already been done in
> migrate_balanced_pgdat().
>
> Let's return the valid zone directly instead of doing the iteration again.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/migrate.c | 21 ++++++++-------------
>  1 file changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5adc55b5347c..b086bd781956 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1973,7 +1973,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
>   * Returns true if this is a safe migration target node for misplaced NUMA
>   * pages. Currently it only checks the watermarks which is crude.
>   */
> -static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
> +static struct zone *migrate_balanced_pgdat(struct pglist_data *pgdat,
>  				   unsigned long nr_migrate_pages)
>  {
>  	int z;
> @@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>  			continue;
>  
>  		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
> -		if (!zone_watermark_ok(zone, 0,
> +		if (zone_watermark_ok(zone, 0,
>  				       high_wmark_pages(zone) +
>  				       nr_migrate_pages,
>  				       ZONE_MOVABLE, 0))
> -			continue;
> -		return true;
> +			return zone;
>  	}
> -	return false;
> +	return NULL;
>  }
>  
>  static struct page *alloc_misplaced_dst_page(struct page *page,
> @@ -2032,6 +2031,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  	int page_lru;
>  	int nr_pages = thp_nr_pages(page);
>  	int order = compound_order(page);
> +	struct zone *zone;
>  
>  	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
> @@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> -		int z;
> -
> +	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {

I think that this reverses the original semantics.  Originally, we give
up and wake up kswapd if there are not enough free pages on the target
node.  But now, you give up and wake up if there are enough free pages.
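
To illustrate with a rough sketch (not the exact code):

        /* before: node NOT balanced -> maybe wake kswapd, then give up */
        if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
                ...
                wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
                return 0;
        }

        /* proposed: a non-NULL zone means the watermark check passed, so
         * this would wake kswapd only when the node is already balanced */
        if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {
                ...
        }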

Best Regards,
Huang, Ying

>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>  			return 0;
> -		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> -			if (managed_zone(pgdat->node_zones + z))
> -				break;
> -		}
> -		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
> +
> +		wakeup_kswapd(zone, 0, order, ZONE_MOVABLE);
>  		return 0;
>  	}
>  
> -- 
>
> 2.33.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  1:52         ` Wei Yang
  2022-03-29  2:05           ` Huang, Ying
@ 2022-03-29  2:22           ` Matthew Wilcox
  2022-03-29 23:59             ` Wei Yang
  1 sibling, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2022-03-29  2:22 UTC (permalink / raw)
  To: Wei Yang; +Cc: Huang, Ying, akpm, linux-mm, linux-kernel, mgorman

On Tue, Mar 29, 2022 at 01:52:30AM +0000, Wei Yang wrote:
> @@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>  			continue;
>  
>  		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
> -		if (!zone_watermark_ok(zone, 0,
> +		if (zone_watermark_ok(zone, 0,
>  				       high_wmark_pages(zone) +
>  				       nr_migrate_pages,
>  				       ZONE_MOVABLE, 0))

Someone's done the silly thing of lining up all of these with spaces,
so either all these lines also need to be shrunk by one space, or you
need to break that convention and just go to a reasonable number of
tabs.  I'd do it like this:

		if (zone_watermark_ok(zone, 0,
				high_wmark_pages(zone) + nr_migrate_pages,
				ZONE_MOVABLE, 0))

but not everybody would.

> @@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> -		int z;
> -
> +	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {

Linus had a rant about this style recently.  He much prefers:

	zone = migrate_balanced_pgdat(pgdat, nr_pages);
	if (zone) {

(the exception is for while loops:

	while ((zone = migrate_balanced_pgdat(pgdat, nr_pages)) != NULL)

where he wants to see the comparison against NULL instead of the awkward
double-bracket)


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  2:22           ` Matthew Wilcox
@ 2022-03-29 23:59             ` Wei Yang
  0 siblings, 0 replies; 17+ messages in thread
From: Wei Yang @ 2022-03-29 23:59 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Wei Yang, Huang, Ying, akpm, linux-mm, linux-kernel, mgorman

On Tue, Mar 29, 2022 at 03:22:51AM +0100, Matthew Wilcox wrote:
>On Tue, Mar 29, 2022 at 01:52:30AM +0000, Wei Yang wrote:
>> @@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>>  			continue;
>>  
>>  		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
>> -		if (!zone_watermark_ok(zone, 0,
>> +		if (zone_watermark_ok(zone, 0,
>>  				       high_wmark_pages(zone) +
>>  				       nr_migrate_pages,
>>  				       ZONE_MOVABLE, 0))
>
>Someone's done the silly thing of lining up all of these with spaces,
>so either all these lines also need to be shrunk by one space, or you
>need to break that convention and just go to a reasonable number of
>tabs.  I'd do it like this:
>
>		if (zone_watermark_ok(zone, 0,
>				high_wmark_pages(zone) + nr_migrate_pages,
>				ZONE_MOVABLE, 0))
>
>but not everybody would.
>
>> @@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  		return 0;
>>  
>>  	/* Avoid migrating to a node that is nearly full */
>> -	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>> -		int z;
>> -
>> +	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {
>
>Linus had a rant about this style recently.  He much prefers:
>
>	zone = migrate_balanced_pgdat(pgdat, nr_pages);
>	if (zone) {
>
>(the exception is for while loops:
>
>	while ((zone = migrate_balanced_pgdat(pgdat, nr_pages)) != NULL)
>
>where he wants to see the comparison against NULL instead of the awkward
>double-bracket)

Matthew,

Thanks for your suggestion, I will change this later.

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
  2022-03-29  2:05           ` Huang, Ying
@ 2022-03-30  0:14             ` Wei Yang
  0 siblings, 0 replies; 17+ messages in thread
From: Wei Yang @ 2022-03-30  0:14 UTC (permalink / raw)
  To: Huang, Ying; +Cc: Wei Yang, akpm, linux-mm, linux-kernel, mgorman

On Tue, Mar 29, 2022 at 10:05:20AM +0800, Huang, Ying wrote:
>Wei Yang <richard.weiyang@gmail.com> writes:
>
>> On Tue, Mar 29, 2022 at 08:43:23AM +0800, Huang, Ying wrote:
>> [...]
>>>>>> --- a/mm/migrate.c
>>>>>> +++ b/mm/migrate.c
>>>>>> @@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>>>>>  		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
>>>>>>  			return 0;
>>>>>>  		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
>>>>>> -			if (populated_zone(pgdat->node_zones + z))
>>>>>> +			if (managed_zone(pgdat->node_zones + z))
>>>>>
>>>>>This looks good to me!  Thanks!  It seems that we can replace
>>>>>populated_zone() in migrate_balanced_pgdat() too.  Right?
>>>>>
>>>>
>>>> Yes, you are right. I didn't spot this.
>>>>
>>>> While this patch comes from following the clue of wakeup_kswapd(), I am not
>>>> sure it is nice to fold that change into this patch.
>>>>
>>>> Which way do you prefer to include it: merged into this one, or as a
>>>> separate patch?
>>>
>>>Either is OK for me.
>>>
>>
>> After reading the code, I would like to do a little simplification. Does this
>> look good to you?
>>
>> From 85c8a5cd708ada3e9f5b0409413407b7be1bc446 Mon Sep 17 00:00:00 2001
>> From: Wei Yang <richard.weiyang@gmail.com>
>> Date: Tue, 29 Mar 2022 09:24:36 +0800
>> Subject: [PATCH] mm/migrate.c: return valid zone for wakeup_kswapd from
>>  migrate_balanced_pgdat()
>>
>> To wake up kswapd, we need to iterate pgdat->node_zones and get the
>> proper zone. However, this work has already been done in
>> migrate_balanced_pgdat().
>>
>> Let's return the valid zone directly instead of doing the iteration again.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/migrate.c | 21 ++++++++-------------
>>  1 file changed, 8 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 5adc55b5347c..b086bd781956 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1973,7 +1973,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
>>   * Returns true if this is a safe migration target node for misplaced NUMA
>>   * pages. Currently it only checks the watermarks which is crude.
>>   */
>> -static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>> +static struct zone *migrate_balanced_pgdat(struct pglist_data *pgdat,
>>  				   unsigned long nr_migrate_pages)
>>  {
>>  	int z;
>> @@ -1985,14 +1985,13 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
>>  			continue;
>>  
>>  		/* Avoid waking kswapd by allocating pages_to_migrate pages. */
>> -		if (!zone_watermark_ok(zone, 0,
>> +		if (zone_watermark_ok(zone, 0,
>>  				       high_wmark_pages(zone) +
>>  				       nr_migrate_pages,
>>  				       ZONE_MOVABLE, 0))
>> -			continue;
>> -		return true;
>> +			return zone;
>>  	}
>> -	return false;
>> +	return NULL;
>>  }
>>  
>>  static struct page *alloc_misplaced_dst_page(struct page *page,
>> @@ -2032,6 +2031,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  	int page_lru;
>>  	int nr_pages = thp_nr_pages(page);
>>  	int order = compound_order(page);
>> +	struct zone *zone;
>>  
>>  	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>>  
>> @@ -2040,16 +2040,11 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  		return 0;
>>  
>>  	/* Avoid migrating to a node that is nearly full */
>> -	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>> -		int z;
>> -
>> +	if ((zone = migrate_balanced_pgdat(pgdat, nr_pages))) {
>
>I think that this reverses the original semantics.  Originally, we give
>up and wake up kswapd if there are not enough free pages on the target
>node.  But now, you give up and wake up if there are enough free pages.
>

You are right, I misunderstood it.

Sorry


-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread

Thread overview: 17+ messages
2022-03-27  2:41 [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Wei Yang
2022-03-27  2:41 ` [PATCH 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone Wei Yang
2022-03-28  1:08   ` Huang, Ying
2022-03-28  7:23     ` Miaohe Lin
2022-03-29  0:45       ` Wei Yang
2022-03-29  1:55         ` Miaohe Lin
2022-03-29  0:41     ` Wei Yang
2022-03-29  0:43       ` Huang, Ying
2022-03-29  1:52         ` Wei Yang
2022-03-29  2:05           ` Huang, Ying
2022-03-30  0:14             ` Wei Yang
2022-03-29  2:22           ` Matthew Wilcox
2022-03-29 23:59             ` Wei Yang
2022-03-28  7:11 ` [PATCH 1/2] mm/vmscan: reclaim only affects managed_zones Miaohe Lin
2022-03-28  7:33 ` David Hildenbrand
2022-03-28  8:14 ` Oscar Salvador
2022-03-29  0:48   ` Wei Yang
