linux-mm.kvack.org archive mirror
From: Shakeel Butt <shakeelb@google.com>
To: Muchun Song <songmuchun@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	 Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Roman Gushchin <guro@fb.com>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	Chris Down <chris@chrisdown.name>,
	 Yafang Shao <laoar.shao@gmail.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	 LKML <linux-kernel@vger.kernel.org>,
	Cgroups <cgroups@vger.kernel.org>,  Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH v2] mm: memcontrol: optimize per-lruvec stats counter memory usage
Date: Tue, 8 Dec 2020 10:20:23 -0800	[thread overview]
Message-ID: <CALvZod7Zt3v-g5gZYUST7_snXPoUijDzPkBT-Kf-ncxpE4W7ng@mail.gmail.com> (raw)
In-Reply-To: <20201208095132.79383-1-songmuchun@bytedance.com>

On Tue, Dec 8, 2020 at 1:53 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so an s32 is wide
> enough for the lruvec_stat_cpu counters. Introduce struct
> per_cpu_lruvec_stat to reduce the memory usage.
>
> The size of struct lruvec_stat is 304 bytes on 64-bit systems, and it
> is allocated per-cpu, so this patch saves 304 / 2 * ncpu bytes
> per-memcg per-node, where ncpu is the number of possible CPUs. With c
> memory cgroups (including dying ones) and n NUMA nodes in the system,
> the total saving is (152 * ncpu * c * n) bytes.
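
(A quick sanity check of the numbers above, assuming NR_VM_NODE_STAT_ITEMS
is 38, which is what the 304-byte figure implies:

    sizeof(struct lruvec_stat)          = 38 * sizeof(long) = 304 bytes
    sizeof(struct per_cpu_lruvec_stat)  = 38 * sizeof(s32)  = 152 bytes

so the saving is 152 bytes per possible CPU, per node, per memcg, i.e.
152 * ncpu * c * n bytes overall.)
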
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

A few nits below:

Reviewed-by: Shakeel Butt <shakeelb@google.com>

> ---
> Changes in v1 -> v2:
>  - Update the commit log to point out how many bytes that we can save.
>
>  include/linux/memcontrol.h |  6 +++++-
>  mm/memcontrol.c            | 10 +++++++++-
>  2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3febf64d1b80..290d6ec8535a 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -92,6 +92,10 @@ struct lruvec_stat {
>         long count[NR_VM_NODE_STAT_ITEMS];
>  };
>
> +struct per_cpu_lruvec_stat {

lruvec_stat is also per-cpu, so the name per_cpu_lruvec_stat does not
really tell how it differs from struct lruvec_stat. Maybe name it
batched_lruvec_stat or something similar.
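
For example (just a naming sketch):

    struct batched_lruvec_stat {
            s32 count[NR_VM_NODE_STAT_ITEMS];
    };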

> +       s32 count[NR_VM_NODE_STAT_ITEMS];
> +};
> +
>  /*
>   * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
>   * which have elements charged to this memcg.
> @@ -111,7 +115,7 @@ struct mem_cgroup_per_node {
>         struct lruvec_stat __percpu *lruvec_stat_local;

Please add a comment above this field explaining why it still needs to
be struct lruvec_stat.
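
Something along these lines, perhaps (wording is only a suggestion):

    /*
     * Local VM stats are not batched against MEMCG_CHARGE_BATCH, so the
     * per-cpu counters can grow well beyond s32; keep struct lruvec_stat.
     */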

>
>         /* Subtree VM stats (batched updates) */
> -       struct lruvec_stat __percpu *lruvec_stat_cpu;
> +       struct per_cpu_lruvec_stat __percpu *lruvec_stat_cpu;
>         atomic_long_t           lruvec_stat[NR_VM_NODE_STAT_ITEMS];
>
>         unsigned long           lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index eec44918d373..da6dc6ca388d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5198,7 +5198,7 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
>                 return 1;
>         }
>
> -       pn->lruvec_stat_cpu = alloc_percpu_gfp(struct lruvec_stat,
> +       pn->lruvec_stat_cpu = alloc_percpu_gfp(struct per_cpu_lruvec_stat,
>                                                GFP_KERNEL_ACCOUNT);
>         if (!pn->lruvec_stat_cpu) {
>                 free_percpu(pn->lruvec_stat_local);
> @@ -7089,6 +7089,14 @@ static int __init mem_cgroup_init(void)
>  {
>         int cpu, node;
>
> +       /*
> +        * Currently the s32 type (see struct per_cpu_lruvec_stat) is used
> +        * for per-memcg-per-cpu caching of per-node statistics. For this
> +        * to work, the overfill threshold must not exceed
> +        * S32_MAX / PAGE_SIZE.
> +        */
> +       BUILD_BUG_ON(MEMCG_CHARGE_BATCH > S32_MAX / PAGE_SIZE);
> +
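
To spell out the arithmetic the check enforces: it guarantees
MEMCG_CHARGE_BATCH * PAGE_SIZE <= S32_MAX. With the current
MEMCG_CHARGE_BATCH of 32, even a 64K PAGE_SIZE only gives
32 * 65536 = 2097152, far below S32_MAX (2147483647), so there is
plenty of headroom.
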
>         cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
>                                   memcg_hotplug_cpu_dead);
>
> --
> 2.11.0
>


Thread overview: 7+ messages
2020-12-08  9:51 [PATCH v2] mm: memcontrol: optimize per-lruvec stats counter memory usage Muchun Song
2020-12-08 18:20 ` Shakeel Butt [this message]
2020-12-09  2:21 ` Roman Gushchin
2020-12-09  2:28   ` Roman Gushchin
2020-12-09  2:31   ` [External] " Muchun Song
2020-12-09  3:52     ` Roman Gushchin
2020-12-09  7:05       ` Muchun Song
