Linux-mm Archive on lore.kernel.org
From: Yafang Shao <laoar.shao@gmail.com>
To: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>,
	"dchinner@redhat.com" <dchinner@redhat.com>,
	 "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: verify page type before getting memcg from it
Date: Fri, 17 Jan 2020 09:14:35 +0800
Message-ID: <CALOAHbBVaQO1=nFgtYg67AmLidOvg9YKKgp3DbpgPeZ9aK2wCA@mail.gmail.com> (raw)
In-Reply-To: <20200116161904.GA14228@tower.DHCP.thefacebook.com>

On Fri, Jan 17, 2020 at 12:19 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Thu, Jan 16, 2020 at 04:50:56PM +0100, Michal Hocko wrote:
> > [Cc Roman]
>
> Thanks!
>
> >
> > On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> > > Per discussion with Dave[1], we always assume we only ever put objects
> > > from memcg-associated slab pages in the list_lru. list_lru_from_kmem()
> > > calls memcg_from_slab_page(), which makes no attempt to verify that the
> > > page is actually a slab page. But the binder code (in
> > > drivers/android/binder_alloc.c) currently stores normal pages in the
> > > list_lru, rather than slab objects. The only reason binder doesn't hit
> > > this issue is that its list_lru is not configured to be memcg aware.
> > > To make this more robust, we should verify the page type before getting
> > > the memcg from it. This patch introduces a new helper and modifies the
> > > old one, giving us the two helpers below:
> > >
> > > struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> > > struct mem_cgroup *memcg_from_slab_page(struct page *page);
> > >
> > > The first helper is used when we are sure the page is a slab page and
> > > also a head page, while the second helper is used when we are not sure
> > > of the page type.
> > >
> > > [1].
> > > https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
> > >
> > > Suggested-by: Dave Chinner <david@fromorbit.com>
> > > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> Hello Yafang!
>
> I actually have something similar in my patch queue, but I'm adding
> a helper which takes a kernel pointer rather than a page:
>   struct mem_cgroup *mem_cgroup_from_obj(void *p);
>
> Will it work for you? If so, I can send it separately.
>

Yes, it fixes the issue as well. Please send it separately.

> (I'm working on switching to per-object accounting of slab objects,
> so that slab pages can be shared between multiple cgroups. So it will
> require a change like this.)
>
> Thanks!
>
> --
>
> From fc2b1ec53285edcb0017275019d60bd577bf64a9 Mon Sep 17 00:00:00 2001
> From: Roman Gushchin <guro@fb.com>
> Date: Thu, 2 Jan 2020 15:22:19 -0800
> Subject: [PATCH] mm: memcg/slab: introduce mem_cgroup_from_obj()
>
> Sometimes we need to get a memcg pointer from a charged kernel object.
> The right way to do it depends on whether it's a proper slab object or
> it's backed by raw pages (e.g. a vmalloc allocation). In the first case
> the kmem_cache->memcg_params.memcg indirection should be used; in the
> second case it's just page->mem_cgroup.
>
> To simplify this task and hide these implementation details let's
> introduce the mem_cgroup_from_obj() helper, which takes a pointer
> to any kernel object and returns a valid memcg pointer or NULL.
>
> The caller is still responsible for ensuring that the returned memcg
> isn't going away underneath it: take the rcu read lock, cgroup mutex, etc.
>
> mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete
> and can be removed.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Yafang Shao <laoar.shao@gmail.com>

> ---
>  include/linux/memcontrol.h |  7 +++++++
>  mm/list_lru.c              | 12 +-----------
>  mm/memcontrol.c            | 32 +++++++++++++++++++++++++++++---
>  3 files changed, 37 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index c372bed6be80..0f6f8e18029e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -420,6 +420,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
> +struct mem_cgroup *mem_cgroup_from_obj(void *p);
> +
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
>
>  struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
> @@ -912,6 +914,11 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>         return true;
>  }
>
> +static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       return NULL;
> +}
> +
>  static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
>         return NULL;
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0f1f6b06b7f3..8de5e3784ee4 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
>         return &nlru->lru;
>  }
>
> -static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> -{
> -       struct page *page;
> -
> -       if (!memcg_kmem_enabled())
> -               return NULL;
> -       page = virt_to_head_page(ptr);
> -       return memcg_from_slab_page(page);
> -}
> -
>  static inline struct list_lru_one *
>  list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>                    struct mem_cgroup **memcg_ptr)
> @@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>         if (!nlru->memcg_lrus)
>                 goto out;
>
> -       memcg = mem_cgroup_from_kmem(ptr);
> +       memcg = mem_cgroup_from_obj(ptr);
>         if (!memcg)
>                 goto out;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6e1ee8577ecf..99d6fe9d7026 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -757,13 +757,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>
>  void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  {
> -       struct page *page = virt_to_head_page(p);
> -       pg_data_t *pgdat = page_pgdat(page);
> +       pg_data_t *pgdat = page_pgdat(virt_to_page(p));
>         struct mem_cgroup *memcg;
>         struct lruvec *lruvec;
>
>         rcu_read_lock();
> -       memcg = memcg_from_slab_page(page);
> +       memcg = mem_cgroup_from_obj(p);
>
>         /* Untracked pages have no memcg, no lruvec. Update only the node */
>         if (!memcg || memcg == root_mem_cgroup) {
> @@ -2636,6 +2635,33 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
>                 unlock_page_lru(page, isolated);
>  }
>
> +/*
> + * Returns a pointer to the memory cgroup to which the kernel object is charged.
> + *
> + * The caller must ensure the memcg lifetime, e.g. by owning a charged object,
> + * taking rcu_read_lock() or cgroup_mutex.
> + */
> +struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       struct page *page;
> +
> +       if (mem_cgroup_disabled())
> +               return NULL;
> +
> +       page = virt_to_head_page(p);
> +
> +       /*
> +        * Slab pages don't have page->mem_cgroup set because corresponding
> +        * kmem caches can be reparented during the lifetime. That's why
> +        * cache->memcg_params.memcg pointer should be used instead.
> +        */
> +       if (PageSlab(page))
> +               return memcg_from_slab_page(page);
> +
> +       /* All other pages use page->mem_cgroup */
> +       return page->mem_cgroup;
> +}
> +
>  #ifdef CONFIG_MEMCG_KMEM
>  static int memcg_alloc_cache_id(void)
>  {
> --
> 2.21.1
>


Thread overview: 4+ messages
2020-01-16 14:10 Yafang Shao
2020-01-16 15:50 ` Michal Hocko
2020-01-16 16:19   ` Roman Gushchin
2020-01-17  1:14     ` Yafang Shao [this message]
