From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying" <ying.huang@intel.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Jagdish Gediya <jvgediya@linux.ibm.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, dave.hansen@linux.intel.com,
	Fan Du <fan.du@intel.com>
Subject: Re: [PATCH] mm: migrate: set demotion targets differently
Date: Thu, 31 Mar 2022 17:33:15 +0800
Message-ID: <ef5ab5e9-e503-771f-a141-dffcef886256@linux.alibaba.com>
In-Reply-To: <8735iybisn.fsf@yhuang6-desk2.ccr.corp.intel.com>



On 3/31/2022 4:58 PM, Huang, Ying wrote:
> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> 
>> "Huang, Ying" <ying.huang@intel.com> writes:
>>
>>> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>>>
>>>> "Huang, Ying" <ying.huang@intel.com> writes:
>>>>
>>>>> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>>>>>
>>>>>> "Huang, Ying" <ying.huang@intel.com> writes:
>>>>>>
>>>>>>> Hi, Jagdish,
>>>>>>>
>>>>>>> Jagdish Gediya <jvgediya@linux.ibm.com> writes:
>>>>>>>
>>>>>>
>>>>>> ...
>>>>>>
>>>>>>>> e.g. with the below NUMA topology, where nodes 0 & 1 are
>>>>>>>> CPU + DRAM nodes, nodes 2 & 3 are equally slower memory-only
>>>>>>>> nodes, and node 4 is the slowest memory-only node,
>>>>>>>>
>>>>>>>> available: 5 nodes (0-4)
>>>>>>>> node 0 cpus: 0 1
>>>>>>>> node 0 size: n MB
>>>>>>>> node 0 free: n MB
>>>>>>>> node 1 cpus: 2 3
>>>>>>>> node 1 size: n MB
>>>>>>>> node 1 free: n MB
>>>>>>>> node 2 cpus:
>>>>>>>> node 2 size: n MB
>>>>>>>> node 2 free: n MB
>>>>>>>> node 3 cpus:
>>>>>>>> node 3 size: n MB
>>>>>>>> node 3 free: n MB
>>>>>>>> node 4 cpus:
>>>>>>>> node 4 size: n MB
>>>>>>>> node 4 free: n MB
>>>>>>>> node distances:
>>>>>>>> node   0   1   2   3   4
>>>>>>>>    0:  10  20  40  40  80
>>>>>>>>    1:  20  10  40  40  80
>>>>>>>>    2:  40  40  10  40  80
>>>>>>>>    3:  40  40  40  10  80
>>>>>>>>    4:  80  80  80  80  10
>>>>>>>>
>>>>>>>> The existing implementation gives below demotion targets,
>>>>>>>>
>>>>>>>> node    demotion_target
>>>>>>>>   0              3, 2
>>>>>>>>   1              4
>>>>>>>>   2              X
>>>>>>>>   3              X
>>>>>>>>   4              X
>>>>>>>>
>>>>>>>> With this patch applied, below are the demotion targets,
>>>>>>>>
>>>>>>>> node    demotion_target
>>>>>>>>   0              3, 2
>>>>>>>>   1              3, 2
>>>>>>>>   2              3
>>>>>>>>   3              4
>>>>>>>>   4              X
>>>>>>>
>>>>>>> For such machine, I think the perfect demotion order is,
>>>>>>>
>>>>>>> node    demotion_target
>>>>>>>   0              2, 3
>>>>>>>   1              2, 3
>>>>>>>   2              4
>>>>>>>   3              4
>>>>>>>   4              X
>>>>>>
>>>>>> I guess "equally slow nodes" is a confusing definition here. If the
>>>>>> system consists of two 1GB equally slow memory devices and the firmware
>>>>>> doesn't want to differentiate between them, couldn't the firmware
>>>>>> present a single NUMA node with 2GB capacity? The fact that we are
>>>>>> finding two NUMA nodes is a hint that there is some difference between
>>>>>> these two memory devices. This is also captured by the fact that the
>>>>>> distance between 2 and 3 is 40 and not 10.
>>>>>
>>>>> Do you have more information about this?
>>>>
>>>> Not sure I follow the question there. I was asking whether firmware
>>>> shouldn't present a single NUMA node if two memory devices are of the
>>>> same type. How will Optane present such a config? Both DIMMs will have
>>>> the same proximity domain value, and hence dax kmem will add them to
>>>> the same NUMA node?
>>>
>>> Sorry for the confusion.  I just wanted to check whether you have more
>>> information about the machine configuration above.  The machines I have
>>> at hand don't have a complex NUMA topology like the one in the patch
>>> description.
>>
>>
>> Even with simple topologies like the one below
>>
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1
>> node 0 size: 4046 MB
>> node 0 free: 3478 MB
>> node 1 cpus: 2 3
>> node 1 size: 4090 MB
>> node 1 free: 3430 MB
>> node 2 cpus:
>> node 2 size: 4074 MB
>> node 2 free: 4037 MB
>> node distances:
>> node   0   1   2
>>    0:  10  20  40
>>    1:  20  10  40
>>    2:  40  40  10
>>
>> With the current code, we get demotion targets assigned as below
>>
>> [    0.337307] Demotion nodes for Node 0: 2
>> [    0.337351] Demotion nodes for Node 1:
>> [    0.337380] Demotion nodes for Node 2:
>>
>> I guess we should fix that to be below?
>>
>> [    0.344554] Demotion nodes for Node 0: 2
>> [    0.344605] Demotion nodes for Node 1: 2
>> [    0.344638] Demotion nodes for Node 2:
> 
> If the cross-socket link has enough bandwidth to accommodate the PMEM
> throughput, the new one is better.  If it doesn't, the old one may be
> better.  So I think we need some kind of user-space override support
> here.  Right?
> 
>> Most of the tests we are doing use QEMU to simulate this. We started
>> looking at this to avoid using demotion completely when no slow memory
>> is present, i.e. we should have a way to identify demotion targets
>> other than node_states[N_MEMORY]. Virtualized platforms can have
>> configs with memory-only NUMA nodes backed by DRAM, and we don't want
>> to consider those as demotion targets.
> 
> Even if demotion targets are set for some node, demotion will not
> happen until it is enabled via sysfs
> (/sys/kernel/mm/numa/demotion_enabled).  So for a system without slow
> memory, just don't enable demotion.
> 
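A small user-space sketch for checking and flipping that knob (my own
illustration here, just equivalent to reading it and echoing "true" into
it as root):

#include <stdio.h>

int main(void)
{
	const char *knob = "/sys/kernel/mm/numa/demotion_enabled";
	char state[16] = "";
	FILE *f = fopen(knob, "r");

	/* Report the current state first. */
	if (f) {
		if (fgets(state, sizeof(state), f))
			printf("demotion_enabled: %s", state);
		fclose(f);
	}

	/* Enabling needs root; the knob takes a boolean ("true"/"1"). */
	f = fopen(knob, "w");
	if (!f) {
		perror(knob);
		return 1;
	}
	fputs("true\n", f);
	fclose(f);
	return 0;
}
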
>> While we are at it, can you let us know how the topology will look on
>> a system with two Optane DIMMs? Do both appear with the same
>> target_node?
> 
> In my test system, multiple Optane DIMMs in one socket are
> represented as one NUMA node.
> 
> I remember Baolin has a different configuration.
> 
> Hi Baolin, can you provide some information about this?

Sure. We have real machines with 2 Optane DIMMs, and they are
represented as 2 NUMA nodes. So we want demotion to support multiple
target nodes. The topology looks like this:

available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 62153 MB
node 0 free: 447 MB
node 1 cpus:
node 1 size: 126969 MB
node 1 free: 84099 MB
node 2 cpus:
node 2 size: 127006 MB
node 2 free: 126925 MB
node distances:
node   0   1   2
   0:  10  20  20
   1:  20  10  20
   2:  20  20  10
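
To make the expectation concrete, here is a rough user-space sketch (my
own illustration, not the kernel implementation) that derives demotion
targets from the distance table above, treating every CPU-less node at
the minimum distance as a target, so several equal-distance PMEM nodes
can all be picked:

#include <stdio.h>

#define NR_NODES 3

/* Distance table and CPU placement as reported by numactl above. */
static const int distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 20 },		/* node 0: DRAM + CPUs     */
	{ 20, 10, 20 },		/* node 1: Optane, no CPUs */
	{ 20, 20, 10 },		/* node 2: Optane, no CPUs */
};
static const int has_cpu[NR_NODES] = { 1, 0, 0 };

int main(void)
{
	for (int src = 0; src < NR_NODES; src++) {
		int best = -1;

		if (!has_cpu[src]) {
			printf("node %d: no demotion target\n", src);
			continue;
		}

		/* Smallest distance from this node to any CPU-less node. */
		for (int dst = 0; dst < NR_NODES; dst++)
			if (!has_cpu[dst] &&
			    (best < 0 || distance[src][dst] < best))
				best = distance[src][dst];

		/* Every CPU-less node at that distance becomes a target. */
		printf("node %d demotes to:", src);
		for (int dst = 0; dst < NR_NODES; dst++)
			if (!has_cpu[dst] && distance[src][dst] == best)
				printf(" %d", dst);
		printf("\n");
	}
	return 0;
}

With the table above this prints "node 0 demotes to: 1 2", which is the
kind of multi-target selection we would like the kernel to allow.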

