From: Shakeel Butt <shakeelb@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <guro@fb.com>, Michal Hocko <mhocko@kernel.org>,
	Linux MM <linux-mm@kvack.org>, Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: optimize memory.numa_stat like memory.stat
Date: Thu, 23 Apr 2020 15:59:41 -0700
Message-ID: <CALvZod4R68wNgzOF9dN=i6LwyUYMBhvM7SXaRJGW9Wn_SmeGGA@mail.gmail.com>
In-Reply-To: <CALvZod7W-Qwa4BRKW0_Ts5f68fwkcqD72SF_4NqZRgEMgA_1-g@mail.gmail.com>

On Thu, Mar 5, 2020 at 8:54 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Thu, Mar 5, 2020 at 8:41 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Tue,  3 Mar 2020 18:20:58 -0800 Shakeel Butt <shakeelb@google.com> wrote:
> >
> > > Currently, reading memory.numa_stat traverses the underlying memcg
> > > tree multiple times to accumulate the stats and present the
> > > hierarchical view of the memcg tree. However, the kernel already
> > > maintains the hierarchical view of these stats and uses it for
> > > memory.stat. Just use the same mechanism for memory.numa_stat as
> > > well.
> > >
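> > > (For context, each line of memory.numa_stat pairs a counter with
> > > its per-node breakdown; the values below are illustrative only:)
> > >
> > > $ cat /dev/cgroup/memory/memory.numa_stat
> > > total=12000 N0=7000 N1=5000
> > > file=8000 N0=5000 N1=3000
> > > anon=4000 N0=2000 N1=2000
> > > unevictable=0 N0=0 N1=0
> > > hierarchical_total=12000 N0=7000 N1=5000
> > >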
> > > I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
> > > file in the presence of 10000 memcgs. The results are:
> > >
> > > Without the patch:
> > > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> > >
> > > real    0m0.700s
> > > user    0m0.001s
> > > sys     0m0.697s
> > >
> > > With the patch:
> > > $ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
> > >
> > > real    0m0.001s
> > > user    0m0.001s
> > > sys     0m0.000s
> > >
> >
> > Can't you do better than that ;)
> >
> > >
> > > +     page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> > > ...
> > >
> > > +     page_state = tree ? memcg_page_state : memcg_page_state_local;
> > >
> >
> > All four of these functions are inlined.  Taking their address in this
> > fashion will force the compiler to generate out-of-line copies.
> >
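> > To illustrate the point, here is a minimal standalone sketch, not
> > the kernel code itself; state_a, state_b and sum are made-up
> > stand-ins for the accessors and their caller:
> >
> > #include <stdbool.h>
> >
> > static unsigned long a[5], b[5];
> >
> > /* stand-ins for the two static inline accessors */
> > static inline unsigned long state_a(int idx) { return a[idx]; }
> > static inline unsigned long state_b(int idx) { return b[idx]; }
> >
> > unsigned long sum(bool tree)
> > {
> > 	/*
> > 	 * Taking the address forces the compiler to emit out-of-line
> > 	 * copies of both inline functions, and each loop iteration
> > 	 * becomes an indirect call that cannot be inlined.
> > 	 */
> > 	unsigned long (*fn)(int idx) = tree ? state_a : state_b;
> > 	unsigned long nr = 0;
> > 	int i;
> >
> > 	for (i = 0; i < 5; i++)
> > 		nr += fn(i);
> > 	return nr;
> > }
> >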
> > If we do it the uglier-and-maybe-a-bit-slower way:
> >
> > --- a/mm/memcontrol.c~memcg-optimize-memorynuma_stat-like-memorystat-fix
> > +++ a/mm/memcontrol.c
> > @@ -3658,17 +3658,16 @@ static unsigned long mem_cgroup_node_nr_
> >         struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> >         unsigned long nr = 0;
> >         enum lru_list lru;
> > -       unsigned long (*page_state)(struct lruvec *lruvec,
> > -                                   enum node_stat_item idx);
> >
> >         VM_BUG_ON((unsigned)nid >= nr_node_ids);
> >
> > -       page_state = tree ? lruvec_page_state : lruvec_page_state_local;
> > -
> >         for_each_lru(lru) {
> >                 if (!(BIT(lru) & lru_mask))
> >                         continue;
> > -               nr += page_state(lruvec, NR_LRU_BASE + lru);
> > +               if (tree)
> > +                       nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
> > +               else
> > +                       nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
> >         }
> >         return nr;
> >  }
> > @@ -3679,14 +3678,14 @@ static unsigned long mem_cgroup_nr_lru_p
> >  {
> >         unsigned long nr = 0;
> >         enum lru_list lru;
> > -       unsigned long (*page_state)(struct mem_cgroup *memcg, int idx);
> > -
> > -       page_state = tree ? memcg_page_state : memcg_page_state_local;
> >
> >         for_each_lru(lru) {
> >                 if (!(BIT(lru) & lru_mask))
> >                         continue;
> > -               nr += page_state(memcg, NR_LRU_BASE + lru);
> > +               if (tree)
> > +                       nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
> > +               else
> > +                       nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
> >         }
> >         return nr;
> >  }
> >
> > Then we get:
> >
> >                      text    data     bss     dec     hex filename
> > now:               106705   35641    1024  143370   2300a mm/memcontrol.o
> > shakeel:           107111   35657    1024  143792   231b0 mm/memcontrol.o
> > shakeel+the-above: 106805   35657    1024  143486   2307e mm/memcontrol.o
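> >
> > (The numbers above are size(1) output, i.e. "size mm/memcontrol.o"
> > on each build.)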
> >
> > Which do we prefer?  The 100-byte patch or the 406-byte patch?
>
> I would go with the 100-byte one. The for-loop is just 5 iterations,
> so doing a check in each iteration should not be an issue.
>

Andrew, anything more needed for this patch to be merged?

Shakeel

Thread overview: 6+ messages
2020-03-04  2:20 Shakeel Butt
2020-03-06  4:41 ` Andrew Morton
2020-03-06  4:54   ` Shakeel Butt
2020-04-23 22:59     ` Shakeel Butt [this message]
2020-04-23 23:10       ` Andrew Morton
2020-04-24  2:38         ` Johannes Weiner
