From: Vlastimil Babka <vbabka@suse.cz>
To: Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Hillf Danton <hdanton@sina.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Michal Hocko <mhocko@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>, "Tang, Feng" <feng.tang@intel.com>
Subject: Re: [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs
Date: Fri, 28 May 2021 14:12:09 +0200	[thread overview]
Message-ID: <416f39e7-704a-86d0-8261-dc27366336ab@suse.cz> (raw)
In-Reply-To: <20210528085545.GJ30378@techsingularity.net>

On 5/28/21 10:55 AM, Mel Gorman wrote:
> On Thu, May 27, 2021 at 12:36:21PM -0700, Dave Hansen wrote:
>> Hi Mel,
>> 
>> Feng Tang tossed these on a "Cascade Lake" system with 96 threads and
>> ~512G of persistent memory and 128G of DRAM.  The PMEM is in "volatile
>> use" mode and being managed via the buddy just like the normal RAM.
>> 
>> The PMEM zones are big ones:
>> 
>>         present  65011712 = 248 G
>>         high       134595 = 525 M
>> 
>> The PMEM nodes, of course, don't have any CPUs in them.
>> 
>> With your series, the pcp->high value per-cpu is 69584 pages or about
>> 270MB per CPU.  Scaled up by the 96 CPU threads, that's ~26GB of
>> worst-case memory in the pcps per zone, or roughly 10% of the size of
>> the zone.
>> 
>> I did see quite a few pcp->counts above 60,000, so it's definitely
>> possible in practice to see the pcps filled up.  This was not observed
>> to cause any actual problems in practice.  But, it's still a bit worrisome.
>> 
> 
> Ok, it does have the potential to trigger early reclaim as pages are
> stored on remote PCP lists. The problem would be transient because
> vmstat would drain those pages over time but still, how about this patch
> on top of the series?
> 
> --8<--
> mm/page_alloc: Split pcp->high across all online CPUs for cpuless nodes
> 
> Dave Hansen reported the following about Feng Tang's tests on a machine
> with persistent memory onlined as a DRAM-like device.
> 
>   Feng Tang tossed these on a "Cascade Lake" system with 96 threads and
>   ~512G of persistent memory and 128G of DRAM.  The PMEM is in "volatile
>   use" mode and being managed via the buddy just like the normal RAM.
> 
>   The PMEM zones are big ones:
> 
>         present  65011712 = 248 G
>         high       134595 = 525 M
> 
>   The PMEM nodes, of course, don't have any CPUs in them.
> 
>   With your series, the pcp->high value per-cpu is 69584 pages or about
>   270MB per CPU.  Scaled up by the 96 CPU threads, that's ~26GB of
>   worst-case memory in the pcps per zone, or roughly 10% of the size of
>   the zone.
> 
> This should not cause a problem as such although it could trigger reclaim
> due to pages being stored on per-cpu lists for CPUs remote to a node. It
> is not possible to treat cpuless nodes exactly the same as normal nodes
> but the worst-case scenario can be mitigated by splitting pcp->high across
> all online CPUs for cpuless memory nodes.
> 
> Suggested-by: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
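
(For context, the diff itself is trimmed above. The mitigation the changelog
describes - splitting the zone's high budget across all online CPUs when the
zone's node has no local CPUs - could look roughly like the untested sketch
below. The function name is made up and this is not the actual patch, just an
illustration assuming the usual mm/page_alloc.c context.)

/*
 * Sketch only, not the real patch: split the zone's pcp->high budget
 * between the CPUs local to the zone's node; if the node is cpuless
 * (e.g. PMEM-only), fall back to splitting between all online CPUs so
 * the worst-case amount of memory left on remote pcp lists stays bounded.
 */
static int zone_high_cpuless_sketch(struct zone *zone, unsigned long total_pages)
{
	int nr_split_cpus;

	nr_split_cpus = cpumask_weight(cpumask_of_node(zone_to_nid(zone)));
	if (!nr_split_cpus)
		nr_split_cpus = num_online_cpus();

	return total_pages / nr_split_cpus;
}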

Maybe we should even consider distinguishing the high limits for local-to-cpu zones
vs. remote ones: for local-to-cpu zones we would divide by the number of local CPUs,
while for remote-to-cpu zones we would divide by all online CPUs.

We can expect CPUs to allocate mostly from their local zones, so leaving more pages
on the percpu lists for those zones can be beneficial.

But since the motivation here was to reduce lock contention on freeing, that's less
clear. We probably can't expect a CPU to be freeing mostly local pages (e.g. when a
large process exits), because no mechanism works towards that, or does it? If a CPU
is freeing to a remote zone, the lower high limit could hurt.

So whether that works in practice would have to be evaluated. It's out of scope
here, just an idea to discuss.
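
Something like this rough, untested sketch, maybe (names made up; it also
glosses over where per-cpu high values would actually be stored and updated):

/*
 * Sketch of the idea above: a CPU gets a larger share of pcp->high for
 * zones on its own node (divide by the node-local CPUs only) and a
 * smaller share for remote zones (divide by all online CPUs), on the
 * assumption that most allocations are node-local.
 */
static int pcp_high_for_cpu_sketch(struct zone *zone, int cpu,
				   unsigned long total_pages)
{
	int node = zone_to_nid(zone);
	int nr_cpus;

	if (cpu_to_node(cpu) == node)
		nr_cpus = cpumask_weight(cpumask_of_node(node));
	else
		nr_cpus = num_online_cpus();

	if (!nr_cpus)		/* cpuless node */
		nr_cpus = num_online_cpus();

	return total_pages / nr_cpus;
}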

Thread overview: 31+ messages
2021-05-25  8:01 [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs Mel Gorman
2021-05-25  8:01 ` [PATCH 1/6] mm/page_alloc: Delete vm.percpu_pagelist_fraction Mel Gorman
2021-05-26 17:41   ` Vlastimil Babka
2021-05-25  8:01 ` [PATCH 2/6] mm/page_alloc: Disassociate the pcp->high from pcp->batch Mel Gorman
2021-05-26 18:14   ` Vlastimil Babka
2021-05-27 10:52     ` Mel Gorman
2021-05-28 10:27       ` Vlastimil Babka
2021-05-25  8:01 ` [PATCH 3/6] mm/page_alloc: Adjust pcp->high after CPU hotplug events Mel Gorman
2021-05-28 11:08   ` Vlastimil Babka
2021-05-25  8:01 ` [PATCH 4/6] mm/page_alloc: Scale the number of pages that are batch freed Mel Gorman
2021-05-28 11:19   ` Vlastimil Babka
2021-05-25  8:01 ` [PATCH 5/6] mm/page_alloc: Limit the number of pages on PCP lists when reclaim is active Mel Gorman
2021-05-28 11:43   ` Vlastimil Babka
2021-05-25  8:01 ` [PATCH 6/6] mm/page_alloc: Introduce vm.percpu_pagelist_high_fraction Mel Gorman
2021-05-28 11:59   ` Vlastimil Babka
2021-05-28 12:53     ` Mel Gorman
2021-05-28 14:38       ` Vlastimil Babka
2021-05-27 19:36 ` [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs Dave Hansen
2021-05-28  8:55   ` Mel Gorman
2021-05-28  9:03     ` David Hildenbrand
2021-05-28  9:08       ` David Hildenbrand
2021-05-28  9:49         ` Mel Gorman
2021-05-28  9:52           ` David Hildenbrand
2021-05-28 10:09             ` Mel Gorman
2021-05-28 10:21               ` David Hildenbrand
2021-05-28 12:12     ` Vlastimil Babka [this message]
2021-05-28 12:37       ` Mel Gorman
2021-05-28 14:39     ` Dave Hansen
2021-05-28 15:18       ` Mel Gorman
2021-05-28 16:17         ` Dave Hansen
2021-05-31 12:00           ` Feng Tang
