* [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
@ 2022-07-11 16:28 Roman Gushchin
  2022-07-12 21:48 ` Alexei Starovoitov
  2022-07-12 22:50 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 6+ messages in thread
From: Roman Gushchin @ 2022-07-11 16:28 UTC (permalink / raw)
  To: bpf; +Cc: Shakeel Butt, Alexei Starovoitov, linux-kernel, Roman Gushchin

The memory consumed by a bpf map is always accounted to the memory
cgroup of the process that created the map. The map can outlive
the memory cgroup if it's used by processes in other cgroups or
is pinned on bpffs. In this case the map pins the original cgroup
in the dying state.

For other types of objects (slab objects, non-slab kernel allocations,
percpu objects and, recently, LRU pages) a reparenting mechanism is
already implemented: on cgroup offlining, charged objects are
reassigned to the parent cgroup. Because all charges and statistics
are fully recursive, it's a fairly cheap operation.

For efficiency and consistency with other types of objects, let's do
the same for bpf maps. Fortunately, thanks to the objcg API, the
required changes are minimal.

Please note that individual allocations (slabs, percpu and large
kmallocs) already have a reparenting mechanism. This commit extends
it to the saved map->memcg pointer by replacing it with map->objcg.
Because dying cgroups are not visible to users and all charges are
recursive, this commit doesn't bring any user-visible behavior changes.

v2:
  added a missing const qualifier

Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
---
 include/linux/bpf.h  |  2 +-
 kernel/bpf/syscall.c | 35 +++++++++++++++++++++++++++--------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2b21f2a3452f..85a4db3e0536 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -221,7 +221,7 @@ struct bpf_map {
 	u32 btf_vmlinux_value_type_id;
 	struct btf *btf;
 #ifdef CONFIG_MEMCG_KMEM
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 #endif
 	char name[BPF_OBJ_NAME_LEN];
 	struct bpf_map_off_arr *off_arr;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ab688d85b2c6..ef60dbc21b17 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -419,35 +419,52 @@ void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock)
 #ifdef CONFIG_MEMCG_KMEM
 static void bpf_map_save_memcg(struct bpf_map *map)
 {
-	map->memcg = get_mem_cgroup_from_mm(current->mm);
+	/* Currently if a map is created by a process belonging to the root
+	 * memory cgroup, get_obj_cgroup_from_current() will return NULL.
+	 * So we have to check map->objcg for being NULL each time it's
+	 * being used.
+	 */
+	map->objcg = get_obj_cgroup_from_current();
 }
 
 static void bpf_map_release_memcg(struct bpf_map *map)
 {
-	mem_cgroup_put(map->memcg);
+	if (map->objcg)
+		obj_cgroup_put(map->objcg);
+}
+
+static struct mem_cgroup *bpf_map_get_memcg(const struct bpf_map *map) {
+	if (map->objcg)
+		return get_mem_cgroup_from_objcg(map->objcg);
+
+	return root_mem_cgroup;
 }
 
 void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
 			   int node)
 {
-	struct mem_cgroup *old_memcg;
+	struct mem_cgroup *memcg, *old_memcg;
 	void *ptr;
 
-	old_memcg = set_active_memcg(map->memcg);
+	memcg = bpf_map_get_memcg(map);
+	old_memcg = set_active_memcg(memcg);
 	ptr = kmalloc_node(size, flags | __GFP_ACCOUNT, node);
 	set_active_memcg(old_memcg);
+	mem_cgroup_put(memcg);
 
 	return ptr;
 }
 
 void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags)
 {
-	struct mem_cgroup *old_memcg;
+	struct mem_cgroup *memcg, *old_memcg;
 	void *ptr;
 
-	old_memcg = set_active_memcg(map->memcg);
+	memcg = bpf_map_get_memcg(map);
+	old_memcg = set_active_memcg(memcg);
 	ptr = kzalloc(size, flags | __GFP_ACCOUNT);
 	set_active_memcg(old_memcg);
+	mem_cgroup_put(memcg);
 
 	return ptr;
 }
@@ -455,12 +472,14 @@ void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags)
 void __percpu *bpf_map_alloc_percpu(const struct bpf_map *map, size_t size,
 				    size_t align, gfp_t flags)
 {
-	struct mem_cgroup *old_memcg;
+	struct mem_cgroup *memcg, *old_memcg;
 	void __percpu *ptr;
 
-	old_memcg = set_active_memcg(map->memcg);
+	memcg = bpf_map_get_memcg(map);
+	old_memcg = set_active_memcg(memcg);
 	ptr = __alloc_percpu_gfp(size, align, flags | __GFP_ACCOUNT);
 	set_active_memcg(old_memcg);
+	mem_cgroup_put(memcg);
 
 	return ptr;
 }
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
  2022-07-11 16:28 [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining Roman Gushchin
@ 2022-07-12 21:48 ` Alexei Starovoitov
  2022-07-12 22:11   ` Shakeel Butt
  2022-07-12 22:50 ` patchwork-bot+netdevbpf
  1 sibling, 1 reply; 6+ messages in thread
From: Alexei Starovoitov @ 2022-07-12 21:48 UTC (permalink / raw)
  To: Roman Gushchin; +Cc: bpf, Shakeel Butt, Alexei Starovoitov, LKML

On Mon, Jul 11, 2022 at 9:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> The memory consumed by a bpf map is always accounted to the memory
> cgroup of the process which created the map. The map can outlive
> the memory cgroup if it's used by processes in other cgroups or
> is pinned on bpffs. In this case the map pins the original cgroup
> in the dying state.
>
> [...]
>
> +static struct mem_cgroup *bpf_map_get_memcg(const struct bpf_map *map) {
> +       if (map->objcg)
> +               return get_mem_cgroup_from_objcg(map->objcg);
> +
> +       return root_mem_cgroup;
>  }
>
>  void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
>                            int node)
>  {
> -       struct mem_cgroup *old_memcg;
> +       struct mem_cgroup *memcg, *old_memcg;
>         void *ptr;
>
> -       old_memcg = set_active_memcg(map->memcg);
> +       memcg = bpf_map_get_memcg(map);
> +       old_memcg = set_active_memcg(memcg);
>         ptr = kmalloc_node(size, flags | __GFP_ACCOUNT, node);
>         set_active_memcg(old_memcg);
> +       mem_cgroup_put(memcg);

Here we might css_put root_mem_cgroup.
Should we css_get it when returning, or
is it marked as CSS_NO_REF?
But mem_cgroup_alloc() doesn't seem to be doing that marking.
I'm lost in that code.


* Re: [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
  2022-07-12 21:48 ` Alexei Starovoitov
@ 2022-07-12 22:11   ` Shakeel Butt
  2022-07-12 22:15     ` Alexei Starovoitov
  0 siblings, 1 reply; 6+ messages in thread
From: Shakeel Butt @ 2022-07-12 22:11 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Roman Gushchin, bpf, Alexei Starovoitov, LKML

On Tue, Jul 12, 2022 at 2:49 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Jul 11, 2022 at 9:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > [...]
>
> Here we might css_put root_mem_cgroup.
> Should we css_get it when returning or
> it's marked as CSS_NO_REF ?
> But mem_cgroup_alloc() doesn't seem to be doing that marking.
> I'm lost at that code.

CSS_NO_REF is set for root_mem_cgroup in cgroup_init_subsys().


* Re: [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
  2022-07-12 22:11   ` Shakeel Butt
@ 2022-07-12 22:15     ` Alexei Starovoitov
  2022-07-13  2:13       ` Roman Gushchin
  0 siblings, 1 reply; 6+ messages in thread
From: Alexei Starovoitov @ 2022-07-12 22:15 UTC (permalink / raw)
  To: Shakeel Butt; +Cc: Roman Gushchin, bpf, Alexei Starovoitov, LKML

On Tue, Jul 12, 2022 at 3:11 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Tue, Jul 12, 2022 at 2:49 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Mon, Jul 11, 2022 at 9:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > [...]
> >
> > Here we might css_put root_mem_cgroup.
> > Should we css_get it when returning or
> > it's marked as CSS_NO_REF ?
> > But mem_cgroup_alloc() doesn't seem to be doing that marking.
> > I'm lost at that code.
>
> CSS_NO_REF is set for root_mem_cgroup in cgroup_init_subsys().

Ahh. I see that
css = ss->css_alloc(NULL); css->flags |= CSS_NO_REF; now.
Thanks.


* Re: [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
  2022-07-11 16:28 [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining Roman Gushchin
  2022-07-12 21:48 ` Alexei Starovoitov
@ 2022-07-12 22:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-07-12 22:50 UTC (permalink / raw)
  To: Roman Gushchin; +Cc: bpf, shakeelb, ast, linux-kernel

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Mon, 11 Jul 2022 09:28:27 -0700 you wrote:
> The memory consumed by a bpf map is always accounted to the memory
> cgroup of the process which created the map. The map can outlive
> the memory cgroup if it's used by processes in other cgroups or
> is pinned on bpffs. In this case the map pins the original cgroup
> in the dying state.
> 
> For other types of objects (slab objects, non-slab kernel allocations,
> percpu objects and recently LRU pages) there is a reparenting process
> implemented: on cgroup offlining charged objects are getting
> reassigned to the parent cgroup. Because all charges and statistics
> are fully recursive it's a fairly cheap operation.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v2] bpf: reparent bpf maps on memcg offlining
    https://git.kernel.org/bpf/bpf-next/c/cbddef2759b6

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining
  2022-07-12 22:15     ` Alexei Starovoitov
@ 2022-07-13  2:13       ` Roman Gushchin
  0 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2022-07-13  2:13 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Shakeel Butt, bpf, Alexei Starovoitov, LKML

On Tue, Jul 12, 2022 at 03:15:27PM -0700, Alexei Starovoitov wrote:
> On Tue, Jul 12, 2022 at 3:11 PM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > On Tue, Jul 12, 2022 at 2:49 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Mon, Jul 11, 2022 at 9:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > [...]
> > >
> > > Here we might css_put root_mem_cgroup.
> > > Should we css_get it when returning or
> > > it's marked as CSS_NO_REF ?
> > > But mem_cgroup_alloc() doesn't seem to be doing that marking.
> > > I'm lost at that code.
> >
> > CSS_NO_REF is set for root_mem_cgroup in cgroup_init_subsys().

Yeah, the root cgroups can't be deleted so we save on the refcounting.
> 
> Ahh. I see that
> css = ss->css_alloc(NULL); css->flags |= CSS_NO_REF; now.
> Thanks.

Thanks for applying the patch!



Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-11 16:28 [PATCH bpf-next v2] bpf: reparent bpf maps on memcg offlining Roman Gushchin
2022-07-12 21:48 ` Alexei Starovoitov
2022-07-12 22:11   ` Shakeel Butt
2022-07-12 22:15     ` Alexei Starovoitov
2022-07-13  2:13       ` Roman Gushchin
2022-07-12 22:50 ` patchwork-bot+netdevbpf
