From: Shakeel Butt <shakeelb@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Roman Gushchin <guro@fb.com>, Michal Hocko <mhocko@kernel.org>,
Linux MM <linux-mm@kvack.org>, Cgroups <cgroups@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: optimize memory.numa_stat like memory.stat
Date: Thu, 5 Mar 2020 20:54:39 -0800 [thread overview]
Message-ID: <CALvZod7W-Qwa4BRKW0_Ts5f68fwkcqD72SF_4NqZRgEMgA_1-g@mail.gmail.com> (raw)
In-Reply-To: <20200305204109.be23f5053e2368d3b8ccaa06@linux-foundation.org>
On Thu, Mar 5, 2020 at 8:41 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 3 Mar 2020 18:20:58 -0800 Shakeel Butt <shakeelb@google.com> wrote:
>
> > Currently reading memory.numa_stat traverses the underlying memcg tree
> > multiple times to accumulate the stats to present the hierarchical view
> > of the memcg tree. However the kernel already maintains the hierarchical
> > view of the stats and uses it in memory.stat. Just use the same mechanism
> > in memory.numa_stat as well.
> >
> > I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
> > file in the presence of 10000 memcgs. The results are:
> >
> > Without the patch:
> > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> >
> > real 0m0.700s
> > user 0m0.001s
> > sys 0m0.697s
> >
> > With the patch:
> > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> >
> > real 0m0.001s
> > user 0m0.001s
> > sys 0m0.000s
> >
>
> Can't you do better than that ;)
>
> >
> > + page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> > ...
> >
> > + page_state = tree ? memcg_page_state : memcg_page_state_local;
> >
>
> All four of these functions are inlined. Taking their address in this
> fashion will force the compiler to generate out-of-line copies.
>
> If we do it the uglier-and-maybe-a-bit-slower way:
>
> --- a/mm/memcontrol.c~memcg-optimize-memorynuma_stat-like-memorystat-fix
> +++ a/mm/memcontrol.c
> @@ -3658,17 +3658,16 @@ static unsigned long mem_cgroup_node_nr_
> struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> unsigned long nr = 0;
> enum lru_list lru;
> - unsigned long (*page_state)(struct lruvec *lruvec,
> - enum node_stat_item idx);
>
> VM_BUG_ON((unsigned)nid >= nr_node_ids);
>
> - page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> -
> for_each_lru(lru) {
> if (!(BIT(lru) & lru_mask))
> continue;
> - nr += page_state(lruvec, NR_LRU_BASE + lru);
> + if (tree)
> + nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
> + else
> + nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> }
> return nr;
> }
> @@ -3679,14 +3678,14 @@ static unsigned long mem_cgroup_nr_lru_p
> {
> unsigned long nr = 0;
> enum lru_list lru;
> - unsigned long (*page_state)(struct mem_cgroup *memcg, int idx);
> -
> - page_state = tree ? memcg_page_state : memcg_page_state_local;
>
> for_each_lru(lru) {
> if (!(BIT(lru) & lru_mask))
> continue;
> - nr += page_state(memcg, NR_LRU_BASE + lru);
> + if (tree)
> + nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
> + else
> + nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
> }
> return nr;
> }
>
> Then we get:
>
> text data bss dec hex filename
> now: 106705 35641 1024 143370 2300a mm/memcontrol.o
> shakeel: 107111 35657 1024 143792 231b0 mm/memcontrol.o
> shakeel+the-above: 106805 35657 1024 143486 2307e mm/memcontrol.o
>
> Which do we prefer? The 100-byte patch or the 406-byte patch?
I would go with the 100-byte one. The for-loop has just 5 iterations,
so doing a check in each iteration should not be an issue.
Shakeel
Thread overview: 12+ messages [~2020-03-06 4:54 UTC]
2020-03-04  2:20 [PATCH] memcg: optimize memory.numa_stat like memory.stat Shakeel Butt
2020-03-06  4:41 ` Andrew Morton
2020-03-06  4:54   ` Shakeel Butt [this message]
2020-04-23 22:59     ` Shakeel Butt
2020-04-23 23:10       ` Andrew Morton
2020-04-24  2:38         ` Johannes Weiner