* [PATCH] mm: verify page type before getting memcg from it
From: Yafang Shao @ 2020-01-16 14:10 UTC
  To: dchinner, akpm; +Cc: linux-mm, Yafang Shao

Per discussion with Dave[1], we always assume that only objects from
memcg-associated slab pages are put on a list_lru. list_lru_from_kmem()
calls memcg_from_slab_page(), which makes no attempt to verify that the
page is actually a slab page. But currently the binder code (in
drivers/android/binder_alloc.c) stores normal pages in its list_lru
rather than slab objects. The only reason binder doesn't hit this issue
is that its list_lru is not configured to be memcg aware.

To make this more robust, we should verify the page type before getting
the memcg from it. This patch introduces a new helper and modifies the
old one, so we now have the two helpers below:

struct mem_cgroup *__memcg_from_slab_page(struct page *page);
struct mem_cgroup *memcg_from_slab_page(struct page *page);

The first helper is used when we are sure the page is a slab page and also
a head page, while the second helper is used when we are not sure of the
page type.
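
For illustration, a lookup on a page of unknown type would look roughly
like this (hypothetical snippet, not part of this patch; the returned
memcg is only stable under rcu_read_lock() or a similar guarantee):

	struct page *page = virt_to_head_page(ptr);
	struct mem_cgroup *memcg;

	rcu_read_lock();
	/* page may or may not be a slab page; the helper checks for us */
	memcg = memcg_from_slab_page(page);
	if (memcg)
		/* ... use memcg while still under rcu_read_lock() ... */;
	rcu_read_unlock();

A caller that already knows the page is a slab head page can call
__memcg_from_slab_page() directly and skip the check, as
__mod_lruvec_slab_state() now does.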

[1].
https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/

Suggested-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/memcontrol.c |  7 ++-----
 mm/slab.h       | 24 +++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9bd4ea7..7658b8e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -460,10 +460,7 @@ ino_t page_cgroup_ino(struct page *page)
 	unsigned long ino = 0;
 
 	rcu_read_lock();
-	if (PageSlab(page) && !PageTail(page))
-		memcg = memcg_from_slab_page(page);
-	else
-		memcg = READ_ONCE(page->mem_cgroup);
+	memcg = memcg_from_slab_page(page);
 	while (memcg && !(memcg->css.flags & CSS_ONLINE))
 		memcg = parent_mem_cgroup(memcg);
 	if (memcg)
@@ -748,7 +745,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = memcg_from_slab_page(page);
+	memcg = __memcg_from_slab_page(page);
 
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg || memcg == root_mem_cgroup) {
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..2444ae4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -329,7 +329,7 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
  * The kmem_cache can be reparented asynchronously. The caller must ensure
  * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
  */
-static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
 {
 	struct kmem_cache *s;
 
@@ -341,6 +341,23 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 }
 
 /*
+ * Use this helper when we are not sure whether the page passes
+ * PageSlab() && !PageTail(); it verifies the page type and falls back
+ * to page->mem_cgroup for non-slab pages.
+ */
+static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
+{
+	struct mem_cgroup *memcg;
+
+	if (PageSlab(page) && !PageTail(page))
+		memcg = __memcg_from_slab_page(page);
+	else
+		memcg = READ_ONCE(page->mem_cgroup);
+
+	return memcg;
+}
+
+/*
  * Charge the slab page belonging to the non-root kmem_cache.
  * Can be called for non-root kmem_caches only.
  */
@@ -438,6 +455,11 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 	return s;
 }
 
+static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
 {
 	return NULL;
-- 
1.8.3.1




* Re: [PATCH] mm: verify page type before getting memcg from it
From: Michal Hocko @ 2020-01-16 15:50 UTC
  To: Yafang Shao; +Cc: dchinner, akpm, linux-mm, Roman Gushchin

[Cc Roman]

On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> Per discussion with Dave[1], we always assume that only objects from
> memcg-associated slab pages are put on a list_lru. list_lru_from_kmem()
> calls memcg_from_slab_page(), which makes no attempt to verify that the
> page is actually a slab page. But currently the binder code (in
> drivers/android/binder_alloc.c) stores normal pages in its list_lru
> rather than slab objects. The only reason binder doesn't hit this issue
> is that its list_lru is not configured to be memcg aware.
>
> To make this more robust, we should verify the page type before getting
> the memcg from it. This patch introduces a new helper and modifies the
> old one, so we now have the two helpers below:
> 
> struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> struct mem_cgroup *memcg_from_slab_page(struct page *page);
> 
> The first helper is used when we are sure the page is a slab page and also
> a head page, while the second helper is used when we are not sure of the
> page type.
> 
> [1].
> https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
> 
> Suggested-by: Dave Chinner <david@fromorbit.com>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
>  mm/memcontrol.c |  7 ++-----
>  mm/slab.h       | 24 +++++++++++++++++++++++-
>  2 files changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9bd4ea7..7658b8e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -460,10 +460,7 @@ ino_t page_cgroup_ino(struct page *page)
>  	unsigned long ino = 0;
>  
>  	rcu_read_lock();
> -	if (PageSlab(page) && !PageTail(page))
> -		memcg = memcg_from_slab_page(page);
> -	else
> -		memcg = READ_ONCE(page->mem_cgroup);
> +	memcg = memcg_from_slab_page(page);
>  	while (memcg && !(memcg->css.flags & CSS_ONLINE))
>  		memcg = parent_mem_cgroup(memcg);
>  	if (memcg)
> @@ -748,7 +745,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  	struct lruvec *lruvec;
>  
>  	rcu_read_lock();
> -	memcg = memcg_from_slab_page(page);
> +	memcg = __memcg_from_slab_page(page);
>  
>  	/* Untracked pages have no memcg, no lruvec. Update only the node */
>  	if (!memcg || memcg == root_mem_cgroup) {
> diff --git a/mm/slab.h b/mm/slab.h
> index 7e94700..2444ae4 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -329,7 +329,7 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>   * The kmem_cache can be reparented asynchronously. The caller must ensure
>   * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
>   */
> -static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
> +static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
>  {
>  	struct kmem_cache *s;
>  
> @@ -341,6 +341,23 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
>  }
>  
>  /*
> + * Use this helper when we are not sure whether the page passes
> + * PageSlab() && !PageTail(); it verifies the page type and falls back
> + * to page->mem_cgroup for non-slab pages.
> + */
> +static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
> +{
> +	struct mem_cgroup *memcg;
> +
> +	if (PageSlab(page) && !PageTail(page))
> +		memcg = __memcg_from_slab_page(page);
> +	else
> +		memcg = READ_ONCE(page->mem_cgroup);
> +
> +	return memcg;
> +}
> +
> +/*
>   * Charge the slab page belonging to the non-root kmem_cache.
>   * Can be called for non-root kmem_caches only.
>   */
> @@ -438,6 +455,11 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>  	return s;
>  }
>  
> +static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
> +{
> +	return NULL;
> +}
> +
>  static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
>  {
>  	return NULL;
> -- 
> 1.8.3.1
> 

-- 
Michal Hocko
SUSE Labs



* Re: [PATCH] mm: verify page type before getting memcg from it
From: Roman Gushchin @ 2020-01-16 16:19 UTC
  To: Michal Hocko, Yafang Shao; +Cc: dchinner, akpm, linux-mm

On Thu, Jan 16, 2020 at 04:50:56PM +0100, Michal Hocko wrote:
> [Cc Roman]

Thanks!

> 
> On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> > Per discussion with Dave[1], we always assume that only objects from
> > memcg-associated slab pages are put on a list_lru. list_lru_from_kmem()
> > calls memcg_from_slab_page(), which makes no attempt to verify that the
> > page is actually a slab page. But currently the binder code (in
> > drivers/android/binder_alloc.c) stores normal pages in its list_lru
> > rather than slab objects. The only reason binder doesn't hit this issue
> > is that its list_lru is not configured to be memcg aware.
> >
> > To make this more robust, we should verify the page type before getting
> > the memcg from it. This patch introduces a new helper and modifies the
> > old one, so we now have the two helpers below:
> > 
> > struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> > struct mem_cgroup *memcg_from_slab_page(struct page *page);
> > 
> > The first helper is used when we are sure the page is a slab page and also
> > a head page, while the second helper is used when we are not sure of the
> > page type.
> > 
> > [1].
> > https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
> > 
> > Suggested-by: Dave Chinner <david@fromorbit.com>
> > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

Hello Yafang!

I actually have something similar in my patch queue, but I'm adding
a helper which takes a kernel pointer rather than a page:
  struct mem_cgroup *mem_cgroup_from_obj(void *p);

Will it work for you? If so, I can send it separately.

(I'm working on switching to per-object accounting of slab objects,
so that slab pages can be shared between multiple cgroups. It will
require a change like this.)

Thanks!

--

From fc2b1ec53285edcb0017275019d60bd577bf64a9 Mon Sep 17 00:00:00 2001
From: Roman Gushchin <guro@fb.com>
Date: Thu, 2 Jan 2020 15:22:19 -0800
Subject: [PATCH] mm: memcg/slab: introduce mem_cgroup_from_obj()

Sometimes we need to get a memcg pointer from a charged kernel object.
The right way to do it depends on whether it's a proper slab object
or it's backed by raw pages (e.g. it's a vmalloc allocation). In the
first case the kmem_cache->memcg_params.memcg indirection should be
used; in the second case it's just page->mem_cgroup.

To simplify this task and hide these implementation details let's
introduce the mem_cgroup_from_obj() helper, which takes a pointer
to any kernel object and returns a valid memcg pointer or NULL.

The caller is still responsible for ensuring that the returned memcg
isn't going away underneath: take the rcu read lock, cgroup mutex etc.
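
For example, a typical lookup might look like this (illustrative sketch
only, not part of the patch):

	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = mem_cgroup_from_obj(ptr);	/* ptr: any charged kernel object */
	if (memcg)
		pr_debug("%p is charged to memcg %p\n", ptr, memcg);
	rcu_read_unlock();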

mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete
and can be removed.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h |  7 +++++++
 mm/list_lru.c              | 12 +-----------
 mm/memcontrol.c            | 32 +++++++++++++++++++++++++++++---
 3 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c372bed6be80..0f6f8e18029e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -420,6 +420,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
+struct mem_cgroup *mem_cgroup_from_obj(void *p);
+
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
 
 struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
@@ -912,6 +914,11 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return true;
 }
 
+static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 {
 	return NULL;
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0f1f6b06b7f3..8de5e3784ee4 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
 	return &nlru->lru;
 }
 
-static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	struct page *page;
-
-	if (!memcg_kmem_enabled())
-		return NULL;
-	page = virt_to_head_page(ptr);
-	return memcg_from_slab_page(page);
-}
-
 static inline struct list_lru_one *
 list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
 		   struct mem_cgroup **memcg_ptr)
@@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
 	if (!nlru->memcg_lrus)
 		goto out;
 
-	memcg = mem_cgroup_from_kmem(ptr);
+	memcg = mem_cgroup_from_obj(ptr);
 	if (!memcg)
 		goto out;
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6e1ee8577ecf..99d6fe9d7026 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -757,13 +757,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 
 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
 {
-	struct page *page = virt_to_head_page(p);
-	pg_data_t *pgdat = page_pgdat(page);
+	pg_data_t *pgdat = page_pgdat(virt_to_page(p));
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = memcg_from_slab_page(page);
+	memcg = mem_cgroup_from_obj(p);
 
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg || memcg == root_mem_cgroup) {
@@ -2636,6 +2635,33 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 		unlock_page_lru(page, isolated);
 }
 
+/*
+ * Returns a pointer to the memory cgroup to which the kernel object is charged.
+ *
+ * The caller must ensure the memcg lifetime, e.g. by owning a charged object,
+ * taking rcu_read_lock() or cgroup_mutex.
+ */
+struct mem_cgroup *mem_cgroup_from_obj(void *p)
+{
+	struct page *page;
+
+	if (mem_cgroup_disabled())
+		return NULL;
+
+	page = virt_to_head_page(p);
+
+	/*
+	 * Slab pages don't have page->mem_cgroup set because the corresponding
+	 * kmem caches can be reparented during their lifetime. That's why
+	 * the cache->memcg_params.memcg pointer should be used instead.
+	 */
+	if (PageSlab(page))
+		return memcg_from_slab_page(page);
+
+	/* All other pages use page->mem_cgroup */
+	return page->mem_cgroup;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_alloc_cache_id(void)
 {
-- 
2.21.1




* Re: [PATCH] mm: verify page type before getting memcg from it
From: Yafang Shao @ 2020-01-17  1:14 UTC
  To: Roman Gushchin; +Cc: Michal Hocko, dchinner, akpm, linux-mm

On Fri, Jan 17, 2020 at 12:19 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Thu, Jan 16, 2020 at 04:50:56PM +0100, Michal Hocko wrote:
> > [Cc Roman]
>
> Thanks!
>
> >
> > On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> > > Per discussion with Dave[1], we always assume that only objects from
> > > memcg-associated slab pages are put on a list_lru. list_lru_from_kmem()
> > > calls memcg_from_slab_page(), which makes no attempt to verify that the
> > > page is actually a slab page. But currently the binder code (in
> > > drivers/android/binder_alloc.c) stores normal pages in its list_lru
> > > rather than slab objects. The only reason binder doesn't hit this issue
> > > is that its list_lru is not configured to be memcg aware.
> > >
> > > To make this more robust, we should verify the page type before getting
> > > the memcg from it. This patch introduces a new helper and modifies the
> > > old one, so we now have the two helpers below:
> > >
> > > struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> > > struct mem_cgroup *memcg_from_slab_page(struct page *page);
> > >
> > > The first helper is used when we are sure the page is a slab page and also
> > > a head page, while the second helper is used when we are not sure of the
> > > page type.
> > >
> > > [1].
> > > https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
> > >
> > > Suggested-by: Dave Chinner <david@fromorbit.com>
> > > Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
>
> Hello Yafang!
>
> I actually have something similar in my patch queue, but I'm adding
> a helper which takes a kernel pointer rather than a page:
>   struct mem_cgroup *mem_cgroup_from_obj(void *p);
>
> Will it work for you? If so, I can send it separately.
>

Yes, it fixes the issue as well. Please send it separately.

> (I'm working on switching to per-object accounting of slab objects,
> so that slab pages can be shared between multiple cgroups. It will
> require a change like this.)
>
> Thanks!
>
> --
>
> From fc2b1ec53285edcb0017275019d60bd577bf64a9 Mon Sep 17 00:00:00 2001
> From: Roman Gushchin <guro@fb.com>
> Date: Thu, 2 Jan 2020 15:22:19 -0800
> Subject: [PATCH] mm: memcg/slab: introduce mem_cgroup_from_obj()
>
> Sometimes we need to get a memcg pointer from a charged kernel object.
> The right way to do it depends on whether it's a proper slab object
> or it's backed by raw pages (e.g. it's a vmalloc allocation). In the
> first case the kmem_cache->memcg_params.memcg indirection should be
> used; in the second case it's just page->mem_cgroup.
>
> To simplify this task and hide these implementation details let's
> introduce the mem_cgroup_from_obj() helper, which takes a pointer
> to any kernel object and returns a valid memcg pointer or NULL.
>
> The caller is still responsible for ensuring that the returned memcg
> isn't going away underneath: take the rcu read lock, cgroup mutex etc.
>
> mem_cgroup_from_kmem() defined in mm/list_lru.c is now obsolete
> and can be removed.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Yafang Shao <laoar.shao@gmail.com>

> ---
>  include/linux/memcontrol.h |  7 +++++++
>  mm/list_lru.c              | 12 +-----------
>  mm/memcontrol.c            | 32 +++++++++++++++++++++++++++++---
>  3 files changed, 37 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index c372bed6be80..0f6f8e18029e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -420,6 +420,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
> +struct mem_cgroup *mem_cgroup_from_obj(void *p);
> +
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
>
>  struct mem_cgroup *get_mem_cgroup_from_page(struct page *page);
> @@ -912,6 +914,11 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>         return true;
>  }
>
> +static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       return NULL;
> +}
> +
>  static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
>         return NULL;
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0f1f6b06b7f3..8de5e3784ee4 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -57,16 +57,6 @@ list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
>         return &nlru->lru;
>  }
>
> -static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> -{
> -       struct page *page;
> -
> -       if (!memcg_kmem_enabled())
> -               return NULL;
> -       page = virt_to_head_page(ptr);
> -       return memcg_from_slab_page(page);
> -}
> -
>  static inline struct list_lru_one *
>  list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>                    struct mem_cgroup **memcg_ptr)
> @@ -77,7 +67,7 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
>         if (!nlru->memcg_lrus)
>                 goto out;
>
> -       memcg = mem_cgroup_from_kmem(ptr);
> +       memcg = mem_cgroup_from_obj(ptr);
>         if (!memcg)
>                 goto out;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6e1ee8577ecf..99d6fe9d7026 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -757,13 +757,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>
>  void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  {
> -       struct page *page = virt_to_head_page(p);
> -       pg_data_t *pgdat = page_pgdat(page);
> +       pg_data_t *pgdat = page_pgdat(virt_to_page(p));
>         struct mem_cgroup *memcg;
>         struct lruvec *lruvec;
>
>         rcu_read_lock();
> -       memcg = memcg_from_slab_page(page);
> +       memcg = mem_cgroup_from_obj(p);
>
>         /* Untracked pages have no memcg, no lruvec. Update only the node */
>         if (!memcg || memcg == root_mem_cgroup) {
> @@ -2636,6 +2635,33 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
>                 unlock_page_lru(page, isolated);
>  }
>
> +/*
> + * Returns a pointer to the memory cgroup to which the kernel object is charged.
> + *
> + * The caller must ensure the memcg lifetime, e.g. by owning a charged object,
> + * taking rcu_read_lock() or cgroup_mutex.
> + */
> +struct mem_cgroup *mem_cgroup_from_obj(void *p)
> +{
> +       struct page *page;
> +
> +       if (mem_cgroup_disabled())
> +               return NULL;
> +
> +       page = virt_to_head_page(p);
> +
> +       /*
> +        * Slab pages don't have page->mem_cgroup set because the corresponding
> +        * kmem caches can be reparented during their lifetime. That's why
> +        * the cache->memcg_params.memcg pointer should be used instead.
> +        */
> +       if (PageSlab(page))
> +               return memcg_from_slab_page(page);
> +
> +       /* All other pages use page->mem_cgroup */
> +       return page->mem_cgroup;
> +}
> +
>  #ifdef CONFIG_MEMCG_KMEM
>  static int memcg_alloc_cache_id(void)
>  {
> --
> 2.21.1
>


