From: Mel Gorman <mgorman@techsingularity.net>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Michal Hocko <mhocko@suse.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Arjan Van De Ven <arjan@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	David Hildenbrand <david@redhat.com>,
	Johannes Weiner <jweiner@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
Date: Mon, 17 Jul 2023 14:50:17 +0100	[thread overview]
Message-ID: <20230717135017.7ro76lsaninbazvf@techsingularity.net> (raw)
In-Reply-To: <87pm4qdhk4.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Mon, Jul 17, 2023 at 05:16:11PM +0800, Huang, Ying wrote:
> Mel Gorman <mgorman@techsingularity.net> writes:
> 
> > Batch should have a much lower maximum than high because it's a deferred cost
> > that gets assigned to an arbitrary task. The worst case is where a process
> > that is a light user of the allocator incurs the full cost of a refill/drain.
> >
> > Again, intuitively this may be a PID control problem for the "Mix" case
> > to estimate the size of "high" required to minimise drains/allocs, as each
> > drain/alloc is potentially a lock contention. The catchall for corner
> > cases would be to decay "high" from vmstat context based on pcp->expires. The
> > decay would prevent "high" being pinned at an artificially high value
> > without any zone lock contention for prolonged periods of time and also
> > mitigate the worst case due to state being per-cpu. The downside is that
> > "high" would also oscillate for a continuous steady allocation pattern, as
> > the PID control might pick an ideal value suitable for a long period of
> > time with the "decay" disrupting that ideal value.
> 
> Maybe we can track the minimal value of pcp->count.  If it has been
> small enough recently, we can avoid decaying pcp->high, because the
> pages in the PCP are being used for allocations rather than sitting
> idle.

Implement it as a separate patch. I suspect this type of heuristic will
be very benchmark-specific and the complexity may not be worth it in the
general case.
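
For illustration only, the shape I would expect such a patch to take is
something like the untested sketch below. The count_min field, the helper
name and the decay factor are invented for the example and exist in no
tree:

/*
 * Untested sketch only. count_min is a hypothetical per_cpu_pages
 * field recording the lowest pcp->count observed since the last decay
 * tick. The allocation path would also have to maintain it, e.g.
 * pcp->count_min = min(pcp->count_min, pcp->count); that is omitted.
 */
static void pcp_high_decay_tick(struct per_cpu_pages *pcp)
{
        int high_min = pcp->batch << 2;         /* arbitrary floor */

        /*
         * If the PCP was drained close to empty recently, the cached
         * pages were feeding allocations rather than sitting idle, so
         * leave "high" alone for this interval.
         */
        if (pcp->count_min > high_min) {
                /* Decay "high" towards the floor to release idle pages. */
                pcp->high = max(high_min, pcp->high - (pcp->high >> 3));
        }

        /* Restart the low-watermark tracking for the next interval. */
        pcp->count_min = pcp->count;
}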

> 
> Another question is as follows.
> 
> For example, on CPU A, a large number of pages are freed and we
> maximize batch and high, so a large number of pages are put in the PCP.
> The possible situations are then:
>
> a) a large number of pages are allocated on CPU A after some time
> b) a large number of pages are allocated on another CPU B
>
> For a), we want the pages to be kept in CPU A's PCP for as long as
> possible.  For b), we want the pages to be kept in CPU A's PCP for as
> short a time as possible.  I think we need to balance between the two.
> What is a reasonable time to keep pages in the PCP without many
> allocations?
> 

This is a corner case where you would rely on vmstat draining the PCP
after a period of time. You cannot reasonably detect the pattern on two
separate per-cpu lists without either inspecting remote CPU state or
maintaining global state, and either would incur cache miss penalties
that probably cost more than the heuristic saves.
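
To make the catchall concrete, the sort of thing I have in mind is the
untested sketch below. The helper name and PCP_DECAY_TICKS are invented
for the example, and "expires" follows the loose naming above rather
than any exact field in the current tree:

/*
 * Untested sketch. The vmstat worker already runs periodically on each
 * CPU in local context, so decaying "high" from there touches only
 * local pcp state and costs no remote cache misses.
 */
#define PCP_DECAY_TICKS 3       /* invented constant for the example */

static void pcp_decay_high_local(void)
{
        struct zone *zone;

        for_each_populated_zone(zone) {
                /* Runs from the per-cpu vmstat worker, so the local
                 * pageset is stable here. */
                struct per_cpu_pages *pcp =
                        this_cpu_ptr(zone->per_cpu_pageset);

                /* Nothing cached locally, nothing to decay. */
                if (!pcp->count)
                        continue;

                /* After a quiet period, halve "high" towards batch. */
                if (--pcp->expires <= 0) {
                        pcp->high = max(pcp->batch, pcp->high >> 1);
                        pcp->expires = PCP_DECAY_TICKS;
                }
        }
}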

-- 
Mel Gorman
SUSE Labs


Thread overview: 22+ messages
2023-07-10  6:53 [RFC 0/2] mm: PCP high auto-tuning Huang Ying
2023-07-10  6:53 ` [RFC 1/2] mm: add framework for " Huang Ying
2023-07-11 11:07   ` Michal Hocko
2023-07-12  7:45     ` Huang, Ying
2023-07-14  8:59       ` Michal Hocko
2023-07-17  8:19         ` Huang, Ying
2023-07-10  6:53 ` [RFC 2/2] mm: alloc/free depth based " Huang Ying
2023-07-11 11:19   ` Michal Hocko
2023-07-12  9:05     ` Mel Gorman
2023-07-13  8:56       ` Huang, Ying
2023-07-14 14:07         ` Mel Gorman
2023-07-17  9:16           ` Huang, Ying
2023-07-17 13:50             ` Mel Gorman [this message]
2023-07-18  0:55               ` Huang, Ying
2023-07-18 12:34                 ` Mel Gorman
2023-07-19  5:59                   ` Huang, Ying
2023-07-19  9:05                     ` Mel Gorman
2023-07-21  7:28                       ` Huang, Ying
2023-07-21  9:21                         ` Mel Gorman
2023-07-24  1:09                           ` Huang, Ying
2023-07-14 11:41       ` Michal Hocko
2023-07-13  8:11     ` Huang, Ying
