linux-fsdevel.vger.kernel.org archive mirror
From: Vlastimil Babka <vbabka@suse.cz>
To: "Chu,Kaiping" <chukaiping@baidu.com>,
	"mcgrof@kernel.org" <mcgrof@kernel.org>,
	"keescook@chromium.org" <keescook@chromium.org>,
	"yzaikin@google.com" <yzaikin@google.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"nigupta@nvidia.com" <nigupta@nvidia.com>,
	"bhe@redhat.com" <bhe@redhat.com>,
	"khalid.aziz@oracle.com" <khalid.aziz@oracle.com>,
	"iamjoonsoo.kim@lge.com" <iamjoonsoo.kim@lge.com>,
	"mateusznosek0@gmail.com" <mateusznosek0@gmail.com>,
	"sh_def@163.com" <sh_def@163.com>,
	Charan Teja Kalla <charante@codeaurora.org>,
	David Rientjes <rientjes@google.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH v4] mm/compaction: let proactive compaction order configurable
Date: Wed, 16 Jun 2021 15:49:52 +0200	[thread overview]
Message-ID: <5007cc13-334b-bd73-857f-8e57c6e2397e@suse.cz> (raw)
In-Reply-To: <a49d590f143e40c8bda4db22111f49b7@baidu.com>

On 6/1/21 3:15 AM, Chu,Kaiping wrote:
> 
> 
>> -----Original Message-----
>> From: Vlastimil Babka <vbabka@suse.cz>
>> Sent: May 29, 2021, 1:42
>> To: Chu,Kaiping <chukaiping@baidu.com>; mcgrof@kernel.org;
>> keescook@chromium.org; yzaikin@google.com; akpm@linux-foundation.org;
>> nigupta@nvidia.com; bhe@redhat.com; khalid.aziz@oracle.com;
>> iamjoonsoo.kim@lge.com; mateusznosek0@gmail.com; sh_def@163.com
>> Cc: linux-kernel@vger.kernel.org; linux-fsdevel@vger.kernel.org;
>> linux-mm@kvack.org
>> Subject: Re: [PATCH v4] mm/compaction: let proactive compaction order
>> configurable
>> 
>> On 4/28/21 4:28 AM, chukaiping wrote:
>> > Currently the proactive compaction order is fixed to
>> > COMPACTION_HPAGE_ORDER (9). That is fine on most machines with plenty
>> > of normal 4KB memory, but it is too high for machines with little
>> > normal memory, for example machines where most of the memory is
>> > configured as 1GB hugetlbfs huge pages. On such machines the maximum
>> > order of free pages is often below 9, and it stays below 9 even after
>> > aggressive compaction. This causes proactive compaction to be
>> > triggered very frequently.
>> 
>> Could you be more concrete about "very frequently"? There's a
>> proactive_defer mechanism that should help here. Normally the proactive
>> compaction attempt happens each 500ms, but if it fails to improve the
>> fragmentation score, it defers for 32 seconds. So is 32 seconds still too
>> frequent? Or the score does improve thus defer doesn't happen, but the cost
>> of that improvement is too high compared to the amount of the
>> improvement?
> I didn't measure the frequency accurately; I only judged it from the code. A defer of 32 seconds is still very short for us; we want the proactive compaction period to be hours.

Hours sounds like a lot, and maybe something that would indeed be easier to
accomplish with userspace-triggered proactive compaction [1] than with any
carefully tuned in-kernel thresholds.

But at that low a frequency, doesn't the non-proactive kswapd+kcompactd
compaction actually happen more often? That path should react to the order
requested by the allocation that woke kswapd, AFAIK.

[1] https://lore.kernel.org/linux-doc/cover.1622454385.git.charante@codeaurora.org/

> 
>> 
>> > On these machines we only care about orders of 3 or 4.
>> > This patch exports the order via sysctl and makes it configurable by
>> > the user; the default value is still COMPACTION_HPAGE_ORDER.
>> >
>> > Signed-off-by: chukaiping <chukaiping@baidu.com>
>> > Reported-by: kernel test robot <lkp@intel.com>
>> > ---
>> >
>> > Changes in v4:
>> >     - change the sysctl file name to proactive_compaction_order
>> >
>> > Changes in v3:
>> >     - change the min value of compaction_order to 1 because the
>> >       fragmentation index of order 0 is always 0
>> >     - move the definition of max_buddy_zone into #ifdef CONFIG_COMPACTION
>> >
>> > Changes in v2:
>> >     - fix the compile error in ia64 and powerpc, move the initialization
>> >       of sysctl_compaction_order to kcompactd_init because
>> >       COMPACTION_HPAGE_ORDER is a variable in these architectures
>> >     - change the hard coded max order number from 10 to MAX_ORDER - 1
>> >
>> >  include/linux/compaction.h |    1 +
>> >  kernel/sysctl.c            |   10 ++++++++++
>> >  mm/compaction.c            |   12 ++++++++----
>> >  3 files changed, 19 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/include/linux/compaction.h b/include/linux/compaction.h
>> > index ed4070e..a0226b1 100644
>> > --- a/include/linux/compaction.h
>> > +++ b/include/linux/compaction.h
>> > @@ -83,6 +83,7 @@ static inline unsigned long compact_gap(unsigned int order)
>> >  #ifdef CONFIG_COMPACTION
>> >  extern int sysctl_compact_memory;
>> >  extern unsigned int sysctl_compaction_proactiveness;
>> > +extern unsigned int sysctl_proactive_compaction_order;
>> >  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>> >  			void *buffer, size_t *length, loff_t *ppos);
>> >  extern int sysctl_extfrag_threshold;
>> > diff --git a/kernel/sysctl.c b/kernel/sysctl.c
>> > index 62fbd09..ed9012e 100644
>> > --- a/kernel/sysctl.c
>> > +++ b/kernel/sysctl.c
>> > @@ -196,6 +196,7 @@ enum sysctl_writes_mode {
>> >  #endif /* CONFIG_SCHED_DEBUG */
>> >
>> >  #ifdef CONFIG_COMPACTION
>> > +static int max_buddy_zone = MAX_ORDER - 1;
>> >  static int min_extfrag_threshold;
>> >  static int max_extfrag_threshold = 1000;
>> >  #endif
>> > @@ -2871,6 +2872,15 @@ int proc_do_static_key(struct ctl_table *table, int write,
>> >  		.extra2		= &one_hundred,
>> >  	},
>> >  	{
>> > +		.procname       = "proactive_compaction_order",
>> > +		.data           = &sysctl_proactive_compaction_order,
>> > +		.maxlen         = sizeof(sysctl_proactive_compaction_order),
>> > +		.mode           = 0644,
>> > +		.proc_handler   = proc_dointvec_minmax,
>> > +		.extra1         = SYSCTL_ONE,
>> > +		.extra2         = &max_buddy_zone,
>> > +	},
>> > +	{
>> >  		.procname	= "extfrag_threshold",
>> >  		.data		= &sysctl_extfrag_threshold,
>> >  		.maxlen		= sizeof(int),
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index e04f447..171436e 100644
>> > --- a/mm/compaction.c
>> > +++ b/mm/compaction.c
>> > @@ -1925,17 +1925,18 @@ static bool kswapd_is_running(pg_data_t *pgdat)
>> >
>> >  /*
>> >   * A zone's fragmentation score is the external fragmentation wrt to the
>> > - * COMPACTION_HPAGE_ORDER. It returns a value in the range [0, 100].
>> > + * sysctl_proactive_compaction_order. It returns a value in the range
>> > + * [0, 100].
>> >   */
>> >  static unsigned int fragmentation_score_zone(struct zone *zone)
>> >  {
>> > -	return extfrag_for_order(zone, COMPACTION_HPAGE_ORDER);
>> > +	return extfrag_for_order(zone, sysctl_proactive_compaction_order);
>> >  }
>> >
>> >  /*
>> >   * A weighted zone's fragmentation score is the external fragmentation
>> > - * wrt to the COMPACTION_HPAGE_ORDER scaled by the zone's size. It
>> > - * returns a value in the range [0, 100].
>> > + * wrt to the sysctl_proactive_compaction_order scaled by the zone's size.
>> > + * It returns a value in the range [0, 100].
>> >   *
>> >   * The scaling factor ensures that proactive compaction focuses on larger
>> >   * zones like ZONE_NORMAL, rather than smaller, specialized zones like
>> > @@ -2666,6 +2667,7 @@ static void compact_nodes(void)
>> >   * background. It takes values in the range [0, 100].
>> >   */
>> >  unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
>> > +unsigned int __read_mostly sysctl_proactive_compaction_order;
>> >
>> >  /*
>> >   * This is the entry point for compacting all nodes via
>> > @@ -2958,6 +2960,8 @@ static int __init kcompactd_init(void)
>> >  	int nid;
>> >  	int ret;
>> >
>> > +	sysctl_proactive_compaction_order = COMPACTION_HPAGE_ORDER;
>> > +
>> >  	ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
>> >  					"mm/compaction:online",
>> >  					kcompactd_cpu_online, NULL);
>> >
> 


Thread overview: 10+ messages
2021-04-28  2:28 [PATCH v4] mm/compaction: let proactive compaction order configurable chukaiping
2021-05-10  0:17 ` Andrew Morton
2021-05-10  2:10   ` Re: " Chu,Kaiping
2021-05-11  4:20   ` David Rientjes
2021-05-28 17:42 ` Vlastimil Babka
2021-06-01  1:15   ` Re: " Chu,Kaiping
2021-06-16 13:49     ` Vlastimil Babka [this message]
2021-06-09 10:44 ` David Hildenbrand
2021-06-15  1:11   ` Re: " Chu,Kaiping
2021-06-15  8:04     ` David Hildenbrand
