* Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
@ 2022-01-28 15:24 kernel test robot
  0 siblings, 0 replies; 6+ messages in thread
From: kernel test robot @ 2022-01-28 15:24 UTC (permalink / raw)
  To: kbuild


CC: kbuild-all@lists.01.org
In-Reply-To: <20220128082751.593478-3-ying.huang@intel.com>
References: <20220128082751.593478-3-ying.huang@intel.com>
TO: Huang Ying <ying.huang@intel.com>

Hi Huang,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.17-rc1 next-20220128]
[cannot apply to tip/sched/core hnaz-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Huang-Ying/NUMA-balancing-optimize-memory-placement-for-memory-tiering-system/20220128-162856
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 2c271fe77d52a0555161926c232cd5bc07178b39
:::::: branch date: 7 hours ago
:::::: commit date: 7 hours ago
config: x86_64-randconfig-m001 (https://download.01.org/0day-ci/archive/20220128/202201282334.HfXqLBa5-lkp(a)intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
mm/vmscan.c:3987 pgdat_balanced() warn: bitwise AND condition is false here

vim +3987 mm/vmscan.c

904d2532d3f5bf3 Huang Ying      2022-01-28  3965  
e716f2eb24defb3 Mel Gorman      2017-05-03  3966  /*
e716f2eb24defb3 Mel Gorman      2017-05-03  3967   * Returns true if there is an eligible zone balanced for the request order
97a225e69a1f880 Joonsoo Kim     2020-06-03  3968   * and highest_zoneidx
e716f2eb24defb3 Mel Gorman      2017-05-03  3969   */
97a225e69a1f880 Joonsoo Kim     2020-06-03  3970  static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
60cefed485a02bd Johannes Weiner 2012-11-29  3971  {
e716f2eb24defb3 Mel Gorman      2017-05-03  3972  	int i;
e716f2eb24defb3 Mel Gorman      2017-05-03  3973  	unsigned long mark = -1;
e716f2eb24defb3 Mel Gorman      2017-05-03  3974  	struct zone *zone;
60cefed485a02bd Johannes Weiner 2012-11-29  3975  
1c30844d2dfe272 Mel Gorman      2018-12-28  3976  	/*
1c30844d2dfe272 Mel Gorman      2018-12-28  3977  	 * Check watermarks bottom-up as lower zones are more likely to
1c30844d2dfe272 Mel Gorman      2018-12-28  3978  	 * meet watermarks.
1c30844d2dfe272 Mel Gorman      2018-12-28  3979  	 */
97a225e69a1f880 Joonsoo Kim     2020-06-03  3980  	for (i = 0; i <= highest_zoneidx; i++) {
e716f2eb24defb3 Mel Gorman      2017-05-03  3981  		zone = pgdat->node_zones + i;
e716f2eb24defb3 Mel Gorman      2017-05-03  3982  
e716f2eb24defb3 Mel Gorman      2017-05-03  3983  		if (!managed_zone(zone))
e716f2eb24defb3 Mel Gorman      2017-05-03  3984  			continue;
e716f2eb24defb3 Mel Gorman      2017-05-03  3985  
e716f2eb24defb3 Mel Gorman      2017-05-03  3986  		mark = high_wmark_pages(zone);
904d2532d3f5bf3 Huang Ying      2022-01-28 @3987  		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
904d2532d3f5bf3 Huang Ying      2022-01-28  3988  		    numa_demotion_enabled &&
904d2532d3f5bf3 Huang Ying      2022-01-28  3989  		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
904d2532d3f5bf3 Huang Ying      2022-01-28  3990  			unsigned long promote_mark;
904d2532d3f5bf3 Huang Ying      2022-01-28  3991  
904d2532d3f5bf3 Huang Ying      2022-01-28  3992  			promote_mark = max(NUMA_BALANCING_PROMOTE_WATERMARK_MIN,
904d2532d3f5bf3 Huang Ying      2022-01-28  3993  				mark / NUMA_BALANCING_PROMOTE_WATERMARK_DIV);
904d2532d3f5bf3 Huang Ying      2022-01-28  3994  			mark += promote_mark;
904d2532d3f5bf3 Huang Ying      2022-01-28  3995  		}
97a225e69a1f880 Joonsoo Kim     2020-06-03  3996  		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
e716f2eb24defb3 Mel Gorman      2017-05-03  3997  			return true;
e716f2eb24defb3 Mel Gorman      2017-05-03  3998  	}
6256c6b499a1689 Mel Gorman      2016-07-28  3999  
e716f2eb24defb3 Mel Gorman      2017-05-03  4000  	/*
97a225e69a1f880 Joonsoo Kim     2020-06-03  4001  	 * If a node has no populated zone within highest_zoneidx, it does not
e716f2eb24defb3 Mel Gorman      2017-05-03  4002  	 * need balancing by definition. This can happen if a zone-restricted
e716f2eb24defb3 Mel Gorman      2017-05-03  4003  	 * allocation tries to wake a remote kswapd.
e716f2eb24defb3 Mel Gorman      2017-05-03  4004  	 */
e716f2eb24defb3 Mel Gorman      2017-05-03  4005  	if (mark == -1)
6256c6b499a1689 Mel Gorman      2016-07-28  4006  		return true;
e716f2eb24defb3 Mel Gorman      2017-05-03  4007  
e716f2eb24defb3 Mel Gorman      2017-05-03  4008  	return false;
60cefed485a02bd Johannes Weiner 2012-11-29  4009  }
60cefed485a02bd Johannes Weiner 2012-11-29  4010  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-02-17 16:26       ` Johannes Weiner
@ 2022-02-18  2:15         ` Huang, Ying
  0 siblings, 0 replies; 6+ messages in thread
From: Huang, Ying @ 2022-02-18  2:15 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Mel Gorman, linux-mm, linux-kernel, Feng Tang,
	Baolin Wang, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
	Shakeel Butt, zhongjiang-ali

Johannes Weiner <hannes@cmpxchg.org> writes:

> Hi Huang,
>
> Sorry, I didn't see this reply until you sent out the new version
> already :( Apologies.

Never mind!

> On Wed, Feb 09, 2022 at 01:24:29PM +0800, Huang, Ying wrote:
>> > On Fri, Jan 28, 2022 at 04:27:50PM +0800, Huang Ying wrote:
>> >> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>> >>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>> >>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>> >>  
>> >> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
>> >> +different types of memory (represented as different NUMA nodes) to
>> >> +place the hot pages in the fast memory.  This is implemented based on
>> >> +unmapping and page faults too.
>> >
>> > NORMAL | TIERING appears to be a non-sensical combination.
>> >
>> > Would it be better to have a tristate (disabled, normal, tiering)
>> > rather than a mask?
>> 
>> NORMAL is for balancing cross-socket memory access among DRAM nodes.
>> TIERING is for optimizing page placement between DRAM and PMEM in one
>> socket.  We think it's possible to do both.
>> 
>> For example, with [3/3] of the patchset,
>> 
>> - TIERING: because DRAM pages aren't made PROT_NONE, balancing among
>>   DRAM nodes is disabled.
>> 
>> - NORMAL | TIERING: both cross-socket balancing among DRAM nodes and
>>   page placement optimization between DRAM and PMEM are enabled.
>
> Ok, I get it. So NORMAL would enable PROT_NONE sampling on all nodes,
> and TIERING would additionally raise the watermarks on DRAM nodes.
>
> Thanks!
>
>> >> @@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>> >>  {
>> >>  	int page_lru;
>> >>  	int nr_pages = thp_nr_pages(page);
>> >> +	int order = compound_order(page);
>> >>  
>> >> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
>> >> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>> >>  
>> >>  	/* Do not migrate THP mapped by multiple processes */
>> >>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>> >>  		return 0;
>> >>  
>> >>  	/* Avoid migrating to a node that is nearly full */
>> >> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
>> >> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>> >> +		int z;
>> >> +
>> >> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
>> >> +		    !numa_demotion_enabled)
>> >> +			return 0;
>> >> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
>> >> +			return 0;
>> >
>> > The encoded behavior doesn't seem very user-friendly: Unless the user
>> > enables numa demotion in a separate flag, enabling numa balancing in
>> > tiered mode will silently do nothing.
>> 
>> In theory, TIERING still does something even with numa_demotion_enabled
>> == false.  In that case it works more like the original NUMA balancing:
>> if there's some free space in the DRAM node (for example, after some
>> programs exit), some PMEM pages will be promoted to DRAM.  But as noted
>> in the change log, this isn't good enough for optimizing page placement.
>
> Right, so it's a behavior that likely isn't going to be useful.
>
>> > Would it make more sense to have a central flag for the operation of
>> > tiered memory systems that will enable both promotion and demotion?
>> 
>> IMHO, it may be possible for people to enable demotion alone.  For
>> example, if some people want to use a user space page placement
>> optimizing solution based on PMU counters, they may disable TIERING, but
>> still use demotion as a way to avoid swapping in some situation.  Do you
>> think this makes sense?
>
> Yes, it does.
>
>> > Alternatively, it could also ignore the state of demotion and promote
>> > anyway if asked to, resulting in regular reclaim to make room. It
>> > might not be the most popular combination, but would be in line with
>> > the zone_reclaim_mode policy of preferring reclaim over remote
>> > accesses.  It would make the knobs behave more as expected and it's
>> > less convoluted than having flags select other user-visible flags.
>> 
>> Sorry, I don't get your idea here.  Do you suggest adding another knob
>> like zone_reclaim_mode?  Then we could define some bits to control
>> demotion and promotion there?  If so, I still don't know how to fit this
>> into the existing NUMA balancing framework.
>
> No, I'm just suggesting to remove the !numa_demotion_enabled check
> from the promotion path on unbalanced nodes. Keep the switches
> independent from each other.
>
> Like you said, demotion without promotion can be a valid config with a
> userspace promoter.
>
> And I'm saying promotion without demotion can be a valid config in a
> zone_reclaim_mode type of setup.
>
> We also seem to agree that degraded promotion when demotion is disabled
> likely isn't very useful to anybody. So maybe it should be removed?
>
> It just comes down to user expectations. There is no master switch that
> says "do the right thing on tiered systems", so in its absence I think
> it would be best to keep the semantics of each of the two knobs simple
> and predictable, without tricky interdependencies - like quietly
> degrading promotion behavior when demotion is disabled.
>
> Does that make sense?

Yes.  It does.  I will do that in the next version!

Best Regards,
Huang, Ying


* Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-02-09  5:24     ` Huang, Ying
@ 2022-02-17 16:26       ` Johannes Weiner
  2022-02-18  2:15         ` Huang, Ying
  0 siblings, 1 reply; 6+ messages in thread
From: Johannes Weiner @ 2022-02-17 16:26 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Peter Zijlstra, Mel Gorman, linux-mm, linux-kernel, Feng Tang,
	Baolin Wang, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
	Shakeel Butt, zhongjiang-ali

Hi Huang,

Sorry, I didn't see this reply until you sent out the new version
already :( Apologies.

On Wed, Feb 09, 2022 at 01:24:29PM +0800, Huang, Ying wrote:
> > On Fri, Jan 28, 2022 at 04:27:50PM +0800, Huang Ying wrote:
> >> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
> >>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
> >>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
> >>  
> >> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> >> +different types of memory (represented as different NUMA nodes) to
> >> +place the hot pages in the fast memory.  This is implemented based on
> >> +unmapping and page faults too.
> >
> > NORMAL | TIERING appears to be a non-sensical combination.
> >
> > Would it be better to have a tristate (disabled, normal, tiering)
> > rather than a mask?
> 
> NORMAL is for balancing cross-socket memory access among DRAM nodes.
> TIERING is for optimizing page placement between DRAM and PMEM in one
> socket.  We think it's possible to do both.
> 
> For example, with [3/3] of the patchset,
> 
> - TIERING: because DRAM pages aren't made PROT_NONE, balancing among
>   DRAM nodes is disabled.
> 
> - NORMAL | TIERING: both cross-socket balancing among DRAM nodes and
>   page placement optimization between DRAM and PMEM are enabled.

Ok, I get it. So NORMAL would enable PROT_NONE sampling on all nodes,
and TIERING would additionally raise the watermarks on DRAM nodes.

Thanks!

> >> @@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
> >>  {
> >>  	int page_lru;
> >>  	int nr_pages = thp_nr_pages(page);
> >> +	int order = compound_order(page);
> >>  
> >> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> >> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
> >>  
> >>  	/* Do not migrate THP mapped by multiple processes */
> >>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
> >>  		return 0;
> >>  
> >>  	/* Avoid migrating to a node that is nearly full */
> >> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
> >> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> >> +		int z;
> >> +
> >> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
> >> +		    !numa_demotion_enabled)
> >> +			return 0;
> >> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
> >> +			return 0;
> >
> > The encoded behavior doesn't seem very user-friendly: Unless the user
> > enables numa demotion in a separate flag, enabling numa balancing in
> > tiered mode will silently do nothing.
> 
> In theory, TIERING still does something even with numa_demotion_enabled
> == false.  In that case it works more like the original NUMA balancing:
> if there's some free space in the DRAM node (for example, after some
> programs exit), some PMEM pages will be promoted to DRAM.  But as noted
> in the change log, this isn't good enough for optimizing page placement.

Right, so it's a behavior that likely isn't going to be useful.

> > Would it make more sense to have a central flag for the operation of
> > tiered memory systems that will enable both promotion and demotion?
> 
> IMHO, it may be possible for people to enable demotion alone.  For
> example, if some people want to use a user space page placement
> optimizing solution based on PMU counters, they may disable TIERING, but
> still use demotion as a way to avoid swapping in some situation.  Do you
> think this makes sense?

Yes, it does.

> > Alternatively, it could also ignore the state of demotion and promote
> > anyway if asked to, resulting in regular reclaim to make room. It
> > might not be the most popular combination, but would be in line with
> > the zone_reclaim_mode policy of preferring reclaim over remote
> > accesses.  It would make the knobs behave more as expected and it's
> > less convoluted than having flags select other user-visible flags.
> 
> Sorry, I don't get your idea here.  Do you suggest adding another knob
> like zone_reclaim_mode?  Then we could define some bits to control
> demotion and promotion there?  If so, I still don't know how to fit this
> into the existing NUMA balancing framework.

No, I'm just suggesting to remove the !numa_demotion_enabled check
from the promotion path on unbalanced nodes. Keep the switches
independent from each other.

Like you said, demotion without promotion can be a valid config with a
userspace promoter.

And I'm saying promotion without demotion can be a valid config in a
zone_reclaim_mode type of setup.

We also seem to agree that degraded promotion when demotion is disabled
likely isn't very useful to anybody. So maybe it should be removed?

It just comes down to user expectations. There is no master switch that
says "do the right thing on tiered systems", so in its absence I think
it would be best to keep the semantics of each of the two knobs simple
and predictable, without tricky interdependencies - like quietly
degrading promotion behavior when demotion is disabled.

Does that make sense?

Thanks!
Johannes


* Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-02-07 17:47   ` Johannes Weiner
@ 2022-02-09  5:24     ` Huang, Ying
  2022-02-17 16:26       ` Johannes Weiner
  0 siblings, 1 reply; 6+ messages in thread
From: Huang, Ying @ 2022-02-09  5:24 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Mel Gorman, linux-mm, linux-kernel, Feng Tang,
	Baolin Wang, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
	Shakeel Butt, zhongjiang-ali

Hi, Johannes,

Thanks a lot for your review!

Johannes Weiner <hannes@cmpxchg.org> writes:

> Hi Huang,
>
> On Fri, Jan 28, 2022 at 04:27:50PM +0800, Huang Ying wrote:
>> It's common for the working-set size of the workload to be larger than
>> the size of the fast memory nodes; otherwise, it's unnecessary to use
>> the slow memory at all.  So there are almost never enough free pages in
>> the fast memory nodes, and the globally hot pages in the slow memory
>> node cannot be promoted to the fast memory node.  To solve this, we
>> have 2 choices, as follows:
>> 
>> a. Ignore the free pages watermark checking when promoting hot pages
>>    from the slow memory node to the fast memory node.  This will
>>    create some memory pressure in the fast memory node, thus
>>    triggering memory reclaim, so the cold pages in the fast memory
>>    node will be demoted to the slow memory node.
>> 
>> b. Make kswapd of the fast memory node reclaim pages until the free
>>    pages are a little (for example, high_watermark / 4) more than the
>>    high watermark.  Then, if the free pages of the fast memory node
>>    reach the high watermark and some hot pages need to be promoted,
>>    kswapd of the fast memory node will be woken up to demote more
>>    cold pages in the fast memory node to the slow memory node.  This
>>    will free some extra space in the fast memory node, so the hot
>>    pages in the slow memory node can be promoted to the fast memory
>>    node.
>> 
>> Choice "a" may create high memory pressure in the fast memory node.
>> If the memory pressure of the workload is high, the pressure may
>> become so high that the memory allocation latency of the workload is
>> affected, e.g. direct reclaim may be triggered.
>> 
>> Choice "b" works much better in this respect.  If the memory pressure
>> of the workload is high, hot page promotion will stop earlier because
>> its allocation watermark is higher than that of normal memory
>> allocation.  So in this patch, choice "b" is implemented.
>
> I agree with that choice.
>
> It's conceivable we'd eventually want a mix of both, where promotions
> boost kswapd watermarks on-demand, such that no fast memory is wasted
> if there are no references into slow memory. But that can be done when
> it proves necessary, IMO.

Sure.

>> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>>  numa_balancing
>>  ==============
>>  
>> -Enables/disables automatic page fault based NUMA memory
>> -balancing. Memory is moved automatically to nodes
>> -that access it often.
>> +Enables/disables and configures automatic page fault based NUMA memory
>> +balancing.  Memory is moved automatically to nodes that access it
>> +often.  The value to set can be the result of ORing the following:
>>  
>> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
>> -is a performance penalty if remote memory is accessed by a CPU. When this
>> -feature is enabled the kernel samples what task thread is accessing memory
>> -by periodically unmapping pages and later trapping a page fault. At the
>> -time of the page fault, it is determined if the data being accessed should
>> -be migrated to a local memory node.
>> += =================================
>> +0x0 NUMA_BALANCING_DISABLED
>> +0x1 NUMA_BALANCING_NORMAL
>> +0x2 NUMA_BALANCING_MEMORY_TIERING
>> += =================================
>> +
>> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
>> +NUMA nodes to reduce remote accesses.  On NUMA machines, there is a
>> +performance penalty if remote memory is accessed by a CPU. When this
>> +feature is enabled the kernel samples what task thread is accessing
>> +memory by periodically unmapping pages and later trapping a page
>> +fault. At the time of the page fault, it is determined if the data
>> +being accessed should be migrated to a local memory node.
>>  
>>  The unmapping of pages and trapping faults incur additional overhead that
>>  ideally is offset by improved memory locality but there is no universal
>> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>>  
>> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
>> +different types of memory (represented as different NUMA nodes) to
>> +place the hot pages in the fast memory.  This is implemented based on
>> +unmapping and page faults too.
>
> NORMAL | TIERING appears to be a non-sensical combination.
>
> Would it be better to have a tristate (disabled, normal, tiering)
> rather than a mask?

NORMAL is for balancing cross-socket memory access among DRAM nodes.
TIERING is for optimizing page placement between DRAM and PMEM in one
socket.  We think it's possible to do both.

For example, with [3/3] of the patchset,

- TIERING: because DRAM pages aren't made PROT_NONE, balancing among
  DRAM nodes is disabled.

- NORMAL | TIERING: both cross-socket balancing among DRAM nodes and
  page placement optimization between DRAM and PMEM are enabled.
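The mode combinations above map to a small bit mask; composing it can be sketched as below. This assumes the patch is applied, so that kernel.numa_balancing accepts a mask rather than only 0/1:

```shell
# Flag values from the proposed documentation table (assumed applied).
NORMAL=$(( 0x1 ))    # NUMA_BALANCING_NORMAL
TIERING=$(( 0x2 ))   # NUMA_BALANCING_MEMORY_TIERING

BOTH=$(( NORMAL | TIERING ))
echo "$BOTH"         # prints 3: cross-socket balancing plus tiering promotion

# Applying it would look like this (requires root; illustration only):
# sysctl -w kernel.numa_balancing=$BOTH
```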

>> @@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>>  {
>>  	int page_lru;
>>  	int nr_pages = thp_nr_pages(page);
>> +	int order = compound_order(page);
>>  
>> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
>> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>>  
>>  	/* Do not migrate THP mapped by multiple processes */
>>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>>  		return 0;
>>  
>>  	/* Avoid migrating to a node that is nearly full */
>> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
>> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
>> +		int z;
>> +
>> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
>> +		    !numa_demotion_enabled)
>> +			return 0;
>> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
>> +			return 0;
>
> The encoded behavior doesn't seem very user-friendly: Unless the user
> enables numa demotion in a separate flag, enabling numa balancing in
> tiered mode will silently do nothing.

In theory, TIERING still does something even with numa_demotion_enabled
== false.  In that case it works more like the original NUMA balancing:
if there's some free space in the DRAM node (for example, after some
programs exit), some PMEM pages will be promoted to DRAM.  But as noted
in the change log, this isn't good enough for optimizing page placement.

> Would it make more sense to have a central flag for the operation of
> tiered memory systems that will enable both promotion and demotion?

IMHO, it may be possible for people to enable demotion alone.  For
example, if some people want to use a user space page placement
optimizing solution based on PMU counters, they may disable TIERING, but
still use demotion as a way to avoid swapping in some situation.  Do you
think this makes sense?

> Alternatively, it could also ignore the state of demotion and promote
> anyway if asked to, resulting in regular reclaim to make room. It
> might not be the most popular combination, but would be in line with
> the zone_reclaim_mode policy of preferring reclaim over remote
> accesses.  It would make the knobs behave more as expected and it's
> less convoluted than having flags select other user-visible flags.

Sorry, I don't get your idea here.  Do you suggest adding another knob
like zone_reclaim_mode?  Then we could define some bits to control
demotion and promotion there?  If so, I still don't know how to fit this
into the existing NUMA balancing framework.

>> @@ -3966,6 +3967,13 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
>>  	return false;
>>  }
>>  
>> +/*
>> + * Keep the free pages on fast memory node a little more than the high
>> + * watermark to accommodate the promoted pages.
>> + */
>> +#define NUMA_BALANCING_PROMOTE_WATERMARK_DIV	4
>> +#define NUMA_BALANCING_PROMOTE_WATERMARK_MIN	(10UL * 1024 * 1024 >> PAGE_SHIFT)
>> +
>>  /*
>>   * Returns true if there is an eligible zone balanced for the request order
>>   * and highest_zoneidx
>> @@ -3987,6 +3995,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>>  			continue;
>>  
>>  		mark = high_wmark_pages(zone);
>> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>> +		    numa_demotion_enabled &&
>> +		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
>> +			unsigned long promote_mark;
>> +
>> +			promote_mark = max(NUMA_BALANCING_PROMOTE_WATERMARK_MIN,
>> +				mark / NUMA_BALANCING_PROMOTE_WATERMARK_DIV);
>> +			mark += promote_mark;
>> +		}
>
> I'm not sure about this formula.
>
> The high watermark straddles both the atomic allocation pool
> (min_free_kbytes) as well as the direct reclaim latency buffer
> (watermark_scale_factor) on top of it. Making the promotion buffer a
> quarter of the raw total number conflates the two settings.
>
> E.g. if the user grows the atomic pool to support bursty network rx,
> the two direct reclaim buffers between min, low and high stay the
> same, but the tier-promotion buffer grows. This is unexpected, and not
> what the user intended.
>
> I'm thinking it would be better to do this:
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7c417bec8207..5e067d66f797 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8573,7 +8573,8 @@ static void __setup_per_zone_wmarks(void)
>  
>  		zone->watermark_boost = 0;
>  		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
> -		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
> +		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
>  
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
>
> and then have kswapd choose between the high and promo watermarks
> depending on the numa balancing mode.
>
> On my 32G desktop with default settings this produces the same
> ballpark of values as your code - high/4=7424, tmp=7311.
>
> This would behave more intuitively with the existing watermark tuning
> knobs, and I think would make the code overall simpler as well.

Yes!  This looks great!  Will do this in the next version.  Thanks!

Best Regards,
Huang, Ying


* Re: [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-01-28  8:27 ` [PATCH -V11 2/3] NUMA balancing: optimize page " Huang Ying
@ 2022-02-07 17:47   ` Johannes Weiner
  2022-02-09  5:24     ` Huang, Ying
  0 siblings, 1 reply; 6+ messages in thread
From: Johannes Weiner @ 2022-02-07 17:47 UTC (permalink / raw)
  To: Huang Ying
  Cc: Peter Zijlstra, Mel Gorman, linux-mm, linux-kernel, Feng Tang,
	Baolin Wang, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador,
	Shakeel Butt, zhongjiang-ali

Hi Huang,

On Fri, Jan 28, 2022 at 04:27:50PM +0800, Huang Ying wrote:
> It's common for the working-set size of the workload to be larger than
> the size of the fast memory nodes; otherwise, it's unnecessary to use
> the slow memory at all.  So there are almost never enough free pages in
> the fast memory nodes, and the globally hot pages in the slow memory
> node cannot be promoted to the fast memory node.  To solve this, we
> have 2 choices, as follows:
> 
> a. Ignore the free pages watermark checking when promoting hot pages
>    from the slow memory node to the fast memory node.  This will
>    create some memory pressure in the fast memory node, thus
>    triggering memory reclaim, so the cold pages in the fast memory
>    node will be demoted to the slow memory node.
> 
> b. Make kswapd of the fast memory node reclaim pages until the free
>    pages are a little (for example, high_watermark / 4) more than the
>    high watermark.  Then, if the free pages of the fast memory node
>    reach the high watermark and some hot pages need to be promoted,
>    kswapd of the fast memory node will be woken up to demote more
>    cold pages in the fast memory node to the slow memory node.  This
>    will free some extra space in the fast memory node, so the hot
>    pages in the slow memory node can be promoted to the fast memory
>    node.
> 
> Choice "a" may create high memory pressure in the fast memory node.
> If the memory pressure of the workload is high, the pressure may
> become so high that the memory allocation latency of the workload is
> affected, e.g. direct reclaim may be triggered.
> 
> Choice "b" works much better in this respect.  If the memory pressure
> of the workload is high, hot page promotion will stop earlier because
> its allocation watermark is higher than that of normal memory
> allocation.  So in this patch, choice "b" is implemented.

I agree with that choice.

It's conceivable we'd eventually want a mix of both, where promotions
boost kswapd watermarks on-demand, such that no fast memory is wasted
if there are no references into slow memory. But that can be done when
it proves necessary, IMO.

> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>  numa_balancing
>  ==============
>  
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configures automatic page fault based NUMA memory
> +balancing.  Memory is moved automatically to nodes that access it
> +often.  The value to set can be the result of ORing the following:
>  
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0x0 NUMA_BALANCING_DISABLED
> +0x1 NUMA_BALANCING_NORMAL
> +0x2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accesses.  On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>  
>  The unmapping of pages and trapping faults incur additional overhead that
>  ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>  
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory.  This is implemented based on
> +unmapping and page fault too.

NORMAL | TIERING appears to be a nonsensical combination.

Would it be better to have a tristate (disabled, normal, tiering)
rather than a mask?

> @@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>  	int page_lru;
>  	int nr_pages = thp_nr_pages(page);
> +	int order = compound_order(page);
>  
> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>  
>  	/* Do not migrate THP mapped by multiple processes */
>  	if (PageTransHuge(page) && total_mapcount(page) > 1)
>  		return 0;
>  
>  	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +		int z;
> +
> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
> +		    !numa_demotion_enabled)
> +			return 0;
> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
> +			return 0;

The encoded behavior doesn't seem very user-friendly: Unless the user
enables numa demotion in a separate flag, enabling numa balancing in
tiered mode will silently do nothing.

Would it make more sense to have a central flag for the operation of
tiered memory systems that will enable both promotion and demotion?

Alternatively, it could also ignore the state of demotion and promote
anyway if asked to, resulting in regular reclaim to make room. It
might not be the most popular combination, but would be in line with
the zone_reclaim_mode policy of preferring reclaim over remote
accesses.  It would make the knobs behave more as expected and it's
less convoluted than having flags select other user-visible flags.

> @@ -3966,6 +3967,13 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
>  	return false;
>  }
>  
> +/*
> + * Keep the free pages on fast memory node a little more than the high
> + * watermark to accommodate the promoted pages.
> + */
> +#define NUMA_BALANCING_PROMOTE_WATERMARK_DIV	4
> +#define NUMA_BALANCING_PROMOTE_WATERMARK_MIN	(10UL * 1024 * 1024 >> PAGE_SHIFT)
> +
>  /*
>   * Returns true if there is an eligible zone balanced for the request order
>   * and highest_zoneidx
> @@ -3987,6 +3995,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  			continue;
>  
>  		mark = high_wmark_pages(zone);
> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> +		    numa_demotion_enabled &&
> +		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
> +			unsigned long promote_mark;
> +
> +			promote_mark = max(NUMA_BALANCING_PROMOTE_WATERMARK_MIN,
> +				mark / NUMA_BALANCING_PROMOTE_WATERMARK_DIV);
> +			mark += promote_mark;
> +		}

I'm not sure about this formula.

The high watermark straddles both the atomic allocation pool
(min_free_kbytes) as well as the direct reclaim latency buffer
(watermark_scale_factor) on top of it. Making the promotion buffer a
quarter of the raw total number conflates the two settings.

E.g. if the user grows the atomic pool to support bursty network rx,
the two direct reclaim buffers between min, low and high stay the
same, but the tier-promotion buffer grows. This is unexpected, and not
what the user intended.

I'm thinking it would be better to do this:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c417bec8207..5e067d66f797 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8573,7 +8573,8 @@ static void __setup_per_zone_wmarks(void)
 
 		zone->watermark_boost = 0;
 		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
-		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
+		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
 
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}

and then have kswapd choose between the high and promo watermarks
depending on the numa balancing mode.

On my 32G desktop with default settings this produces the same
ballpark of values as your code - high/4=7424, tmp=7311.

This would behave more intuitively with the existing watermark tuning
knobs, and I think would make the code overall simpler as well.


* [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-01-28  8:27 [PATCH -V11 0/3] NUMA balancing: optimize memory " Huang Ying
@ 2022-01-28  8:27 ` Huang Ying
  2022-02-07 17:47   ` Johannes Weiner
  0 siblings, 1 reply; 6+ messages in thread
From: Huang Ying @ 2022-01-28  8:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Baolin Wang,
	Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt,
	zhongjiang-ali

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory
usually differs.

In such a system, because the memory access pattern changes over time,
some pages in the slow memory may become hot globally.  So in this
patch, the NUMA balancing mechanism is enhanced to optimize page
placement among the different memory types dynamically according to
page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism stops migrating pages if the
free memory of the target node falls below the high watermark.  This
is a reasonable policy if there's only one memory type.  But it makes
the original NUMA balancing mechanism almost useless for optimizing
page placement among different memory types.  Details are as follows.

It is the common case that the working-set size of the workload is
larger than the size of the fast memory nodes.  Otherwise, it would be
unnecessary to use the slow memory at all.  So there are almost never
enough free pages in the fast memory nodes, and thus the globally hot
pages in the slow memory node cannot be promoted to the fast memory
node.  To solve the issue, we have 2 choices as follows,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim, so that the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little (for example, high_watermark / 4) more than the
   high watermark.  Then, if the free pages of the fast memory node
   fall to the high watermark and some hot pages need to be promoted,
   kswapd of the fast memory node will be woken up to demote more
   cold pages in the fast memory node to the slow memory node.  This
   frees some extra space in the fast memory node, so the hot pages
   in the slow memory node can be promoted to the fast memory node.

The choice "a" may create high memory pressure in the fast memory
node.  If the memory pressure of the workload is high, it may become
so high that the memory allocation latency of the workload is
affected, e.g. direct reclaim may be triggered.

The choice "b" works much better in this respect.  If the memory
pressure of the workload is high, hot page promotion will stop
earlier because its allocation watermark is higher than that of
normal memory allocation.  So in this patch, choice "b" is
implemented.

In addition to the original page placement optimization among
sockets, the NUMA balancing mechanism is extended to optimize page
placement according to hot/cold among different memory types.  So the
sysctl user space interface (numa_balancing) is extended in a
backward compatible way as follows, so that users can enable/disable
these functionalities individually.

The sysctl is converted from a Boolean value to a bit field.  The
flags are defined as,

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING

We have tested the patch with the pmbench memory access benchmark
with an 80:20 read/write ratio and a Gaussian access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory Modules.  The test results show that the pmbench score
improves by up to 95.9%.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
 include/linux/sched/sysctl.h                | 10 +++++++
 kernel/sched/core.c                         | 21 ++++++++++++---
 kernel/sysctl.c                             |  2 +-
 mm/migrate.c                                | 19 ++++++++++++--
 mm/vmscan.c                                 | 17 ++++++++++++
 6 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index d359bcfadd39..ea32ba0c5d3c 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
 numa_balancing
 ==============
 
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
+Enables/disables and configure automatic page fault based NUMA memory
+balancing.  Memory is moved automatically to nodes that access it
+often.  The value to set can be the result to OR the following,
 
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+= =================================
+0x0 NUMA_BALANCING_DISABLED
+0x1 NUMA_BALANCING_NORMAL
+0x2 NUMA_BALANCING_MEMORY_TIERING
+= =================================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
 
 The unmapping of pages and trapping faults incur additional overhead that
 ideally is offset by improved memory locality but there is no universal
@@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
 numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
 numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
 
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory.  This is implemented based on
+unmapping and page fault too.
 
 numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
 ===============================================================================================================================
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index c19dd5a2c05c..b5eec8854c5a 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -23,6 +23,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  *  control realtime throttling:
  *
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 848eaa0efe0e..b8b8e5feb8ef 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4279,7 +4279,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
 {
 	if (enabled)
 		static_branch_enable(&sched_numa_balancing);
@@ -4287,13 +4289,22 @@ void set_numabalancing_state(bool enabled)
 		static_branch_disable(&sched_numa_balancing);
 }
 
+void set_numabalancing_state(bool enabled)
+{
+	if (enabled)
+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+	else
+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+	__set_numabalancing_state(enabled);
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
+	int state = sysctl_numa_balancing_mode;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -4303,8 +4314,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
-	if (write)
-		set_numabalancing_state(state);
+	if (write) {
+		sysctl_numa_balancing_mode = state;
+		__set_numabalancing_state(state);
+	}
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 5ae443b2882e..c90a564af720 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= SYSCTL_FOUR,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
diff --git a/mm/migrate.c b/mm/migrate.c
index a5971e9f6e6a..61f7e82e6708 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,6 +51,7 @@
 #include <linux/oom.h>
 #include <linux/memory.h>
 #include <linux/random.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2034,16 +2035,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+		    !numa_demotion_enabled)
+			return 0;
+		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 08ab556c7678..bc6fbbc8bedd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3966,6 +3967,13 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
 	return false;
 }
 
+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK_DIV	4
+#define NUMA_BALANCING_PROMOTE_WATERMARK_MIN	(10UL * 1024 * 1024 >> PAGE_SHIFT)
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and highest_zoneidx
@@ -3987,6 +3995,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    numa_demotion_enabled &&
+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+			unsigned long promote_mark;
+
+			promote_mark = max(NUMA_BALANCING_PROMOTE_WATERMARK_MIN,
+				mark / NUMA_BALANCING_PROMOTE_WATERMARK_DIV);
+			mark += promote_mark;
+		}
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
-- 
2.30.2



end of thread, other threads:[~2022-02-18  2:15 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-28 15:24 [PATCH -V11 2/3] NUMA balancing: optimize page placement for memory tiering system kernel test robot
  -- strict thread matches above, loose matches on Subject: below --
2022-01-28  8:27 [PATCH -V11 0/3] NUMA balancing: optimize memory " Huang Ying
2022-01-28  8:27 ` [PATCH -V11 2/3] NUMA balancing: optimize page " Huang Ying
2022-02-07 17:47   ` Johannes Weiner
2022-02-09  5:24     ` Huang, Ying
2022-02-17 16:26       ` Johannes Weiner
2022-02-18  2:15         ` Huang, Ying
