From: Shakeel Butt <shakeelb@google.com>
To: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>, Linux MM <linux-mm@kvack.org>,
	 Vlastimil Babka <vbabka@suse.cz>,
	Kernel Team <kernel-team@fb.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v6 07/19] mm: memcg/slab: allocate obj_cgroups for non-root slab pages
Date: Fri, 19 Jun 2020 17:31:44 -0700
Message-ID: <CALvZod7ny6yVMj3Wy42gGy6NH=KdGbqoFStQ4am_PG2Fwo1fPA@mail.gmail.com>
In-Reply-To: <20200620002513.GD237539@carbon.dhcp.thefacebook.com>

On Fri, Jun 19, 2020 at 5:25 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Fri, Jun 19, 2020 at 09:36:16AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
> > >
> > > Allocate and release memory to store obj_cgroup pointers for each
> > > non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> > > to the allocated space.
> > >
> > > To distinguish between obj_cgroups and memcg pointers in cases
> > > where it's not obvious which one is used (as in page_cgroup_ino()),
> > > let's always set the lowest bit in the obj_cgroup case.
> > >
> >
> > I think the commit message should talk about the potential overhead
> > (i.e. an extra pointer for each object) along with the justifications
> > (i.e. less internal fragmentation and potentially more savings than
> > the overhead).
>
> How about adding the following chunk? I don't like forward links in
> commit messages, though, so maybe it should go into the cover letter instead?
>
> This commit temporarily increases the memory footprint of the kernel
> memory accounting: to store the obj_cgroup pointers, we need room for
> one pointer per allocated object. However, the following patches in the
> series will enable sharing of slab pages between memory cgroups, which
> will dramatically increase total slab utilization, so the final memory
> footprint will be significantly smaller than before.
>

This looks good to me.
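
For concreteness, a rough back-of-the-envelope estimate of that
temporary overhead (the cache geometry here is illustrative, not taken
from the patch): a 4KB slab page holding 64-byte objects contains
4096 / 64 = 64 objects, so the obj_cgroups vector costs
64 * 8 bytes = 512 bytes per slab page on a 64-bit kernel, i.e. about
12.5% of the page. That is the cost the later patches in the series are
expected to win back by deduplicating the per-memcg copies of every
slab cache.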

> >
> > > Signed-off-by: Roman Gushchin <guro@fb.com>
> > > Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> > > ---
> > >  include/linux/mm_types.h |  5 +++-
> > >  include/linux/slab_def.h |  6 +++++
> > >  include/linux/slub_def.h |  5 ++++
> > >  mm/memcontrol.c          | 17 +++++++++++---
> > >  mm/slab.h                | 49 ++++++++++++++++++++++++++++++++++++++++
> > >  5 files changed, 78 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > index 64ede5f150dc..0277fbab7c93 100644
> > > --- a/include/linux/mm_types.h
> > > +++ b/include/linux/mm_types.h
> > > @@ -198,7 +198,10 @@ struct page {
> > >         atomic_t _refcount;
> > >
> > >  #ifdef CONFIG_MEMCG
> > > -       struct mem_cgroup *mem_cgroup;
> > > +       union {
> > > +               struct mem_cgroup *mem_cgroup;
> > > +               struct obj_cgroup **obj_cgroups;
> > > +       };
> > >  #endif
> > >
> > >         /*
> > > diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> > > index abc7de77b988..ccda7b9669a5 100644
> > > --- a/include/linux/slab_def.h
> > > +++ b/include/linux/slab_def.h
> > > @@ -114,4 +114,10 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> > >         return reciprocal_divide(offset, cache->reciprocal_buffer_size);
> > >  }
> > >
> > > +static inline int objs_per_slab_page(const struct kmem_cache *cache,
> > > +                                    const struct page *page)
> > > +{
> > > +       return cache->num;
> > > +}
> > > +
> > >  #endif /* _LINUX_SLAB_DEF_H */
> > > diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> > > index 30e91c83d401..f87302dcfe8c 100644
> > > --- a/include/linux/slub_def.h
> > > +++ b/include/linux/slub_def.h
> > > @@ -198,4 +198,9 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
> > >         return __obj_to_index(cache, page_address(page), obj);
> > >  }
> > >
> > > +static inline int objs_per_slab_page(const struct kmem_cache *cache,
> > > +                                    const struct page *page)
> > > +{
> > > +       return page->objects;
> > > +}
> > >  #endif /* _LINUX_SLUB_DEF_H */
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 7ff66275966c..2020c7542aa1 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -569,10 +569,21 @@ ino_t page_cgroup_ino(struct page *page)
> > >         unsigned long ino = 0;
> > >
> > >         rcu_read_lock();
> > > -       if (PageSlab(page) && !PageTail(page))
> > > +       if (PageSlab(page) && !PageTail(page)) {
> > >                 memcg = memcg_from_slab_page(page);
> > > -       else
> > > -               memcg = READ_ONCE(page->mem_cgroup);
> > > +       } else {
> > > +               memcg = page->mem_cgroup;
> > > +
> > > +               /*
> > > +                * The lowest bit set means that memcg isn't a valid
> > > +                * memcg pointer, but an obj_cgroups pointer.
> > > +                * In this case the page is shared and doesn't belong
> > > +                * to any specific memory cgroup.
> > > +                */
> > > +               if ((unsigned long) memcg & 0x1UL)
> > > +                       memcg = NULL;
> > > +       }
> > > +
> > >         while (memcg && !(memcg->css.flags & CSS_ONLINE))
> > >                 memcg = parent_mem_cgroup(memcg);
> > >         if (memcg)
> > > diff --git a/mm/slab.h b/mm/slab.h
> > > index 8a574d9361c1..a1633ea15fbf 100644
> > > --- a/mm/slab.h
> > > +++ b/mm/slab.h
> > > @@ -319,6 +319,18 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
> > >         return s->memcg_params.root_cache;
> > >  }
> > >
> > > +static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
> > > +{
> > > +       /*
> > > +        * page->mem_cgroup and page->obj_cgroups are sharing the same
> > > +        * space. To distinguish between them in case we don't know for sure
> > > +        * that the page is a slab page (e.g. page_cgroup_ino()), let's
> > > +        * always set the lowest bit of obj_cgroups.
> > > +        */
> > > +       return (struct obj_cgroup **)
> > > +               ((unsigned long)page->obj_cgroups & ~0x1UL);
> > > +}
> > > +
> > >  /*
> > >   * Expects a pointer to a slab page. Please note that PageSlab() check
> > >   * isn't sufficient, as it also returns true for tail compound slab pages,
> > > @@ -406,6 +418,26 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
> > >         percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
> > >  }
> > >
> > > +static inline int memcg_alloc_page_obj_cgroups(struct page *page,
> > > +                                              struct kmem_cache *s, gfp_t gfp)
> > > +{
> > > +       unsigned int objects = objs_per_slab_page(s, page);
> > > +       void *vec;
> > > +
> > > +       vec = kcalloc(objects, sizeof(struct obj_cgroup *), gfp);
> >
> > Should the above allocation be on the same node as the page?
>
> Yeah, it's a clever idea. The following patch should do the trick.
> Andrew, can you please squash this in?
>
> Thank you!
>
>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
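
For readers following the series, here is a minimal sketch of how the
pieces quoted above fit together once an object gets charged.
charge_slab_obj() is hypothetical glue for illustration; only
obj_to_index() and page_obj_cgroups() come from the patch itself:

  /* Hypothetical helper: record which obj_cgroup owns one object. */
  static void charge_slab_obj(struct kmem_cache *s, struct page *page,
                              void *obj, struct obj_cgroup *objcg)
  {
          unsigned int off = obj_to_index(s, page, obj);

          /* page_obj_cgroups() masks off the low tag bit before use. */
          page_obj_cgroups(page)[off] = objcg;
  }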

> diff --git a/mm/slab.h b/mm/slab.h
> index 0a31600a0f5c..2a036eefbd7e 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -233,7 +233,8 @@ static inline int memcg_alloc_page_obj_cgroups(struct page *page,
>         unsigned int objects = objs_per_slab_page(s, page);
>         void *vec;
>
> -       vec = kcalloc(objects, sizeof(struct obj_cgroup *), gfp);
> +       vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
> +                          page_to_nid(page));
>         if (!vec)
>                 return -ENOMEM;
>
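
Putting the squashed change in context, the resulting allocation path
would look roughly like this. The tagged store at the end is not quoted
in this thread; it is reconstructed from the commit message ("always
set the lowest bit") and from page_obj_cgroups() above, so treat it as
a sketch rather than the literal patch:

  static inline int memcg_alloc_page_obj_cgroups(struct page *page,
                                                 struct kmem_cache *s,
                                                 gfp_t gfp)
  {
          unsigned int objects = objs_per_slab_page(s, page);
          void *vec;

          /*
           * Keep the vector on the same NUMA node as the slab page it
           * describes, since that is where its objects will be used.
           */
          vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
                             page_to_nid(page));
          if (!vec)
                  return -ENOMEM;

          /*
           * Set the lowest bit so that page_cgroup_ino() can tell this
           * apart from a plain mem_cgroup pointer.
           */
          page->obj_cgroups = (struct obj_cgroup **)
                  ((unsigned long)vec | 0x1UL);

          return 0;
  }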

