linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Huang Ying <ying.huang@intel.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Arjan Van De Ven <arjan@linux.intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	David Hildenbrand <david@redhat.com>,
	Johannes Weiner <jweiner@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
Date: Fri, 14 Jul 2023 13:41:28 +0200	[thread overview]
Message-ID: <ZLE0aF/pT9zZeoGt@dhcp22.suse.cz> (raw)
In-Reply-To: <20230712090526.thk2l7sbdcdsllfi@techsingularity.net>

On Wed 12-07-23 10:05:26, Mel Gorman wrote:
> On Tue, Jul 11, 2023 at 01:19:46PM +0200, Michal Hocko wrote:
> > On Mon 10-07-23 14:53:25, Huang Ying wrote:
> > > To auto-tune PCP high for each CPU automatically, an
> > > allocation/freeing depth based PCP high auto-tuning algorithm is
> > > implemented in this patch.
> > > 
> > > The basic idea behind the algorithm is to detect the repetitive
> > > allocation and freeing pattern with short enough period (about 1
> > > second).  The period needs to be short to respond to allocation and
> > > freeing pattern changes quickly and control the memory wasted by
> > > unnecessary caching.
> > 
> > 1s is an eternity from the allocation POV. Is a time based sampling
> > really a good choice? I would have expected a natural allocation/freeing
> > feedback mechanism. I.e. double the batch size when the batch is
> > consumed and needs to be refilled, and shrink it under memory
> > pressure (GFP_NOWAIT allocation fails) or when the surplus grows too
> > high over the batch (e.g. twice as much).  Have you considered
> > something as simple as that?
> > Quite honestly I am not sure a time based approach is a good choice
> > because memory consumption tends to be quite bulky (e.g. application
> > starts or workload transitions based on requests).
> >  
> 
> I tend to agree. Tuning based on the recent allocation pattern without frees
> would make more sense and also be symmetric with how free_factor works. I
> suspect that time-based may be heavily orientated around the will-it-scale
> benchmark. While I only glanced at this, a few things jumped out
> 
> 1. Time-based heuristics are not ideal. congestion_wait() and
>    friends were an obvious case where time-based heuristics fell apart even
>    before the event they waited on was removed. For congestion, it happened to
>    work for slow storage for a while but that was about it.  Allocation
>    stream detection has a similar problem. If a process is allocating
>    heavily, then fine; but if it allocates in bursts of less than a second,
>    spaced more than a second apart, then it will not adapt. While I do not
>    think it is explicitly mentioned anywhere, my understanding was that
>    heuristics like this within mm/ should be driven by explicit events as
>    much as possible and not by time.

Agreed. I would also like to point out that it is important to identify
which events we should actually care about. Remember that the primary
motivation of the tuning is to reduce the lock contention. That being
said, it is less of a problem to have a streaming or bursty demand for
memory if that doesn't really cause the said contention, right? So any
auto-tuning should consider that as well and not inflate the batch in
the absence of contention. That of course means that solely
deallocation based monitoring is not sufficient.

-- 
Michal Hocko
SUSE Labs


  parent reply	other threads: [~2023-07-14 11:41 UTC]

Thread overview: 22+ messages
2023-07-10  6:53 [RFC 0/2] mm: PCP high auto-tuning Huang Ying
2023-07-10  6:53 ` [RFC 1/2] mm: add framework for " Huang Ying
2023-07-11 11:07   ` Michal Hocko
2023-07-12  7:45     ` Huang, Ying
2023-07-14  8:59       ` Michal Hocko
2023-07-17  8:19         ` Huang, Ying
2023-07-10  6:53 ` [RFC 2/2] mm: alloc/free depth based " Huang Ying
2023-07-11 11:19   ` Michal Hocko
2023-07-12  9:05     ` Mel Gorman
2023-07-13  8:56       ` Huang, Ying
2023-07-14 14:07         ` Mel Gorman
2023-07-17  9:16           ` Huang, Ying
2023-07-17 13:50             ` Mel Gorman
2023-07-18  0:55               ` Huang, Ying
2023-07-18 12:34                 ` Mel Gorman
2023-07-19  5:59                   ` Huang, Ying
2023-07-19  9:05                     ` Mel Gorman
2023-07-21  7:28                       ` Huang, Ying
2023-07-21  9:21                         ` Mel Gorman
2023-07-24  1:09                           ` Huang, Ying
2023-07-14 11:41       ` Michal Hocko [this message]
2023-07-13  8:11     ` Huang, Ying
