* [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-02  3:35 Roman Gushchin
  2022-07-02  5:50   ` Shakeel Butt
  2022-07-04 15:12   ` Michal Hocko
  0 siblings, 2 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-02  3:35 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Roman Gushchin, Yafang Shao, Johannes Weiner, Michal Hocko,
	Shakeel Butt, Muchun Song, cgroups, linux-mm, bpf

Yafang Shao reported an issue related to the accounting of bpf
memory: if a bpf map is charged indirectly for memory consumed
from an interrupt context and allocations are enforced, MEMCG_MAX
events are not raised.

It's not/less of an issue in a generic case because consequent
allocations from a process context will trigger the reclaim and
MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
memory cgroup, so it might never happen. So the cgroup can
significantly exceed the memory.max limit without even triggering
MEMCG_MAX events.

Fix this by making sure that we never enforce allocations without
raising a MEMCG_MAX event.

Reported-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: bpf@vger.kernel.org
---
 mm/memcontrol.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 655c09393ad5..eb383695659a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2577,6 +2577,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	bool passed_oom = false;
 	bool may_swap = true;
 	bool drained = false;
+	bool raised_max_event = false;
 	unsigned long pflags;
 
 retry:
@@ -2616,6 +2617,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 		goto nomem;
 
 	memcg_memory_event(mem_over_limit, MEMCG_MAX);
+	raised_max_event = true;
 
 	psi_memstall_enter(&pflags);
 	nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
@@ -2682,6 +2684,13 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	if (!(gfp_mask & (__GFP_NOFAIL | __GFP_HIGH)))
 		return -ENOMEM;
 force:
+	/*
+	 * If the allocation has to be enforced, don't forget to raise
+	 * a MEMCG_MAX event.
+	 */
+	if (!raised_max_event)
+		memcg_memory_event(mem_over_limit, MEMCG_MAX);
+
 	/*
 	 * The allocation either can't fail or will lead to more memory
 	 * being freed very soon.  Allow memory usage go over the limit
-- 
2.36.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread
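
For reference, MEMCG_MAX is the counter that cgroup v2 exposes to userspace as
the "max" line of a cgroup's memory.events file; that file is what watchers
read (or poll) to notice that a group keeps hitting memory.max. Below is a
minimal reader sketch; the cgroup path is only an example, and real watchers
would typically wait on the file with poll()/inotify rather than re-read it.

    /* Sketch: read the "max" counter (MEMCG_MAX) from a cgroup v2
     * memory.events file. The path is an arbitrary example. */
    #include <stdio.h>
    #include <string.h>

    static long read_max_events(const char *path)
    {
            char key[64];
            long val;
            FILE *f = fopen(path, "r");

            if (!f)
                    return -1;
            while (fscanf(f, "%63s %ld", key, &val) == 2) {
                    if (!strcmp(key, "max")) {
                            fclose(f);
                            return val;
                    }
            }
            fclose(f);
            return -1;
    }

    int main(void)
    {
            long max = read_max_events("/sys/fs/cgroup/bpf/memory.events");

            printf("max events: %ld\n", max);
            return 0;
    }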

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-02  5:50   ` Shakeel Butt
  0 siblings, 0 replies; 37+ messages in thread
From: Shakeel Butt @ 2022-07-02  5:50 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Michal Hocko,
	Muchun Song, Cgroups, Linux MM, bpf

On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> Yafang Shao reported an issue related to the accounting of bpf
> memory: if a bpf map is charged indirectly for memory consumed
> from an interrupt context and allocations are enforced, MEMCG_MAX
> events are not raised.
>
> It's not/less of an issue in a generic case because consequent
> allocations from a process context will trigger the reclaim and
> MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> memory cgroup, so it might never happen.

The patch looks good but the above sentence is confusing. What might
never happen? Reclaim or MAX event on dying memcg?

> So the cgroup can
> significantly exceed the memory.max limit without even triggering
> MEMCG_MAX events.
>
> Fix this by making sure that we never enforce allocations without
> raising a MEMCG_MAX event.
>
> Reported-by: Yafang Shao <laoar.shao@gmail.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: bpf@vger.kernel.org
> ---
>  mm/memcontrol.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 655c09393ad5..eb383695659a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2577,6 +2577,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>         bool passed_oom = false;
>         bool may_swap = true;
>         bool drained = false;
> +       bool raised_max_event = false;
>         unsigned long pflags;
>
>  retry:
> @@ -2616,6 +2617,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>                 goto nomem;
>
>         memcg_memory_event(mem_over_limit, MEMCG_MAX);
> +       raised_max_event = true;
>
>         psi_memstall_enter(&pflags);
>         nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
> @@ -2682,6 +2684,13 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>         if (!(gfp_mask & (__GFP_NOFAIL | __GFP_HIGH)))
>                 return -ENOMEM;
>  force:
> +       /*
> +        * If the allocation has to be enforced, don't forget to raise
> +        * a MEMCG_MAX event.
> +        */
> +       if (!raised_max_event)
> +               memcg_memory_event(mem_over_limit, MEMCG_MAX);
> +
>         /*
>          * The allocation either can't fail or will lead to more memory
>          * being freed very soon.  Allow memory usage go over the limit
> --
> 2.36.1
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-02 15:39     ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-02 15:39 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Michal Hocko,
	Muchun Song, Cgroups, Linux MM, bpf

On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > Yafang Shao reported an issue related to the accounting of bpf
> > memory: if a bpf map is charged indirectly for memory consumed
> > from an interrupt context and allocations are enforced, MEMCG_MAX
> > events are not raised.
> >
> > It's not/less of an issue in a generic case because consequent
> > allocations from a process context will trigger the reclaim and
> > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > memory cgroup, so it might never happen.
> 
> The patch looks good but the above sentence is confusing. What might
> never happen? Reclaim or MAX event on dying memcg?

Direct reclaim and MAX events. I agree it might be not clear without
looking into the code. How about something like this?

"It's not/less of an issue in a generic case because consequent
allocations from a process context will trigger the direct reclaim
and MEMCG_MAX events will be raised. However a bpf map can belong
to a dying/abandoned memory cgroup, so there will be no allocations
from a process context and no MEMCG_MAX events will be triggered."

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread
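
To make the above concrete: in try_charge_memcg() a charge from a context that
cannot block (for example an interrupt-context charge on behalf of a bpf map)
skips direct reclaim entirely, and if the gfp mask allows enforcement it lands
on the force: path. The toy model below (ordinary userspace C, not kernel code)
mirrors that control flow and the fix, which raises the event on the force path
whenever nothing raised it earlier.

    /* Toy model of the decision flow discussed in this thread; it is not
     * the kernel implementation. "can_block" stands for a process-context
     * charge that may do direct reclaim, "must_force" for a charge with
     * __GFP_HIGH/__GFP_NOFAIL semantics that is enforced anyway. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long usage, limit = 100, max_events;

    static bool try_charge(unsigned long pages, bool can_block, bool must_force)
    {
            bool raised_max_event = false;

            if (usage + pages <= limit) {
                    usage += pages;
                    return true;
            }

            if (can_block) {
                    max_events++;                   /* MEMCG_MAX */
                    raised_max_event = true;
                    /* direct reclaim, retries and OOM handling elided */
            }

            if (!must_force)
                    return false;                   /* -ENOMEM */

            /* force path: the charge goes through anyway; after the patch
             * the event is raised here too, so it is never silent */
            if (!raised_max_event)
                    max_events++;
            usage += pages;
            return true;
    }

    int main(void)
    {
            try_charge(150, false, true);   /* atomic charge over the limit */
            printf("usage=%lu max_events=%lu\n", usage, max_events);
            return 0;
    }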

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-02 15:39     ` Roman Gushchin
@ 2022-07-03  5:36       ` Shakeel Butt
  -1 siblings, 0 replies; 37+ messages in thread
From: Shakeel Butt @ 2022-07-03  5:36 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Michal Hocko,
	Muchun Song, Cgroups, Linux MM, bpf

On Sat, Jul 2, 2022 at 8:39 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > Yafang Shao reported an issue related to the accounting of bpf
> > > memory: if a bpf map is charged indirectly for memory consumed
> > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > events are not raised.
> > >
> > > It's not/less of an issue in a generic case because consequent
> > > allocations from a process context will trigger the reclaim and
> > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > memory cgroup, so it might never happen.
> >
> > The patch looks good but the above sentence is confusing. What might
> > never happen? Reclaim or MAX event on dying memcg?
>
> Direct reclaim and MAX events. I agree it might be not clear without
> looking into the code. How about something like this?
>
> "It's not/less of an issue in a generic case because consequent
> allocations from a process context will trigger the direct reclaim
> and MEMCG_MAX events will be raised. However a bpf map can belong
> to a dying/abandoned memory cgroup, so there will be no allocations
> from a process context and no MEMCG_MAX events will be triggered."
>

SGTM and you can add:

Acked-by: Shakeel Butt <shakeelb@google.com>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-03 22:50         ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-03 22:50 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Michal Hocko,
	Muchun Song, Cgroups, Linux MM, bpf

On Sat, Jul 02, 2022 at 10:36:28PM -0700, Shakeel Butt wrote:
> On Sat, Jul 2, 2022 at 8:39 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > events are not raised.
> > > >
> > > > It's not/less of an issue in a generic case because consequent
> > > > allocations from a process context will trigger the reclaim and
> > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > memory cgroup, so it might never happen.
> > >
> > > The patch looks good but the above sentence is confusing. What might
> > > never happen? Reclaim or MAX event on dying memcg?
> >
> > Direct reclaim and MAX events. I agree it might be not clear without
> > looking into the code. How about something like this?
> >
> > "It's not/less of an issue in a generic case because consequent
> > allocations from a process context will trigger the direct reclaim
> > and MEMCG_MAX events will be raised. However a bpf map can belong
> > to a dying/abandoned memory cgroup, so there will be no allocations
> > from a process context and no MEMCG_MAX events will be triggered."
> >
> 
> SGTM and you can add:
> 
> Acked-by: Shakeel Butt <shakeelb@google.com>

Thank you!

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-02 15:39     ` Roman Gushchin
@ 2022-07-04 15:07     ` Michal Hocko
  2022-07-04 15:30         ` Michal Hocko
  2022-07-05 20:49       ` Roman Gushchin
  -1 siblings, 2 replies; 37+ messages in thread
From: Michal Hocko @ 2022-07-04 15:07 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Shakeel Butt, Andrew Morton, Yafang Shao, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > Yafang Shao reported an issue related to the accounting of bpf
> > > memory: if a bpf map is charged indirectly for memory consumed
> > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > events are not raised.
> > >
> > > It's not/less of an issue in a generic case because consequent
> > > allocations from a process context will trigger the reclaim and
> > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > memory cgroup, so it might never happen.
> > 
> > The patch looks good but the above sentence is confusing. What might
> > never happen? Reclaim or MAX event on dying memcg?
> 
> Direct reclaim and MAX events. I agree it might be not clear without
> looking into the code. How about something like this?
> 
> "It's not/less of an issue in a generic case because consequent
> allocations from a process context will trigger the direct reclaim
> and MEMCG_MAX events will be raised. However a bpf map can belong
> to a dying/abandoned memory cgroup, so there will be no allocations
> from a process context and no MEMCG_MAX events will be triggered."

Could you expand a little bit more on the situation? Can those charges to
offline memcg happen indefinitely? How can it ever go away then? Also is
this something that we actually want to encourage?

In other words shouldn't those remote charges be redirected when the
target memcg is offline?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-04 15:12   ` Michal Hocko
  0 siblings, 0 replies; 37+ messages in thread
From: Michal Hocko @ 2022-07-04 15:12 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Shakeel Butt,
	Muchun Song, cgroups, linux-mm, bpf

On Fri 01-07-22 20:35:21, Roman Gushchin wrote:
> Yafang Shao reported an issue related to the accounting of bpf
> memory: if a bpf map is charged indirectly for memory consumed
> from an interrupt context and allocations are enforced, MEMCG_MAX
> events are not raised.

So I guess this will be a GFP_ATOMIC request failing due to the hard
limit, right? I think it would be easier to understand if the specific
allocation request type was mentioned.

> It's not/less of an issue in a generic case because consequent
> allocations from a process context will trigger the reclaim and
> MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> memory cgroup, so it might never happen. So the cgroup can
> significantly exceed the memory.max limit without even triggering
> MEMCG_MAX events.

More on that in other reply.

> Fix this by making sure that we never enforce allocations without
> raising a MEMCG_MAX event.
> 
> Reported-by: Yafang Shao <laoar.shao@gmail.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: bpf@vger.kernel.org

The patch makes sense to me though even without the weird charge to a
dead memcg aspect. It is true that a very calm memcg can trigger the
event much later after a GFP_ATOMIC charge (or __GFP_HIGH in general)
fails.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memcontrol.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 655c09393ad5..eb383695659a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2577,6 +2577,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  	bool passed_oom = false;
>  	bool may_swap = true;
>  	bool drained = false;
> +	bool raised_max_event = false;
>  	unsigned long pflags;
>  
>  retry:
> @@ -2616,6 +2617,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  		goto nomem;
>  
>  	memcg_memory_event(mem_over_limit, MEMCG_MAX);
> +	raised_max_event = true;
>  
>  	psi_memstall_enter(&pflags);
>  	nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages,
> @@ -2682,6 +2684,13 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  	if (!(gfp_mask & (__GFP_NOFAIL | __GFP_HIGH)))
>  		return -ENOMEM;
>  force:
> +	/*
> +	 * If the allocation has to be enforced, don't forget to raise
> +	 * a MEMCG_MAX event.
> +	 */
> +	if (!raised_max_event)
> +		memcg_memory_event(mem_over_limit, MEMCG_MAX);
> +
>  	/*
>  	 * The allocation either can't fail or will lead to more memory
>  	 * being freed very soon.  Allow memory usage go over the limit
> -- 
> 2.36.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 37+ messages in thread
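
As a rough illustration of the kind of charge being discussed (a sketch, not a
quote of the actual bpf code): allocations made on behalf of a map from a
non-sleepable context temporarily switch the active memcg to the map's owner
and use an atomic, accounted gfp mask, so try_charge_memcg() cannot reclaim
and, because GFP_ATOMIC implies __GFP_HIGH, the charge is enforced via the
force: path. The helper name and exact flags below are illustrative.

    /* Hedged kernel-side sketch, loosely modeled on the accounted map
     * allocations discussed in this thread. */
    #include <linux/slab.h>
    #include <linux/sched/mm.h>
    #include <linux/memcontrol.h>

    static void *map_kmalloc_accounted(struct mem_cgroup *map_memcg,
                                       size_t size, int node)
    {
            struct mem_cgroup *old_memcg;
            void *ptr;

            old_memcg = set_active_memcg(map_memcg);  /* charge the map's memcg */
            ptr = kmalloc_node(size, GFP_ATOMIC | __GFP_ACCOUNT | __GFP_NOWARN,
                               node);
            set_active_memcg(old_memcg);

            return ptr;
    }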

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-04 15:30         ` Michal Hocko
  0 siblings, 0 replies; 37+ messages in thread
From: Michal Hocko @ 2022-07-04 15:30 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Shakeel Butt, Andrew Morton, Yafang Shao, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Mon 04-07-22 17:07:32, Michal Hocko wrote:
> On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > events are not raised.
> > > >
> > > > It's not/less of an issue in a generic case because consequent
> > > > allocations from a process context will trigger the reclaim and
> > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > memory cgroup, so it might never happen.
> > > 
> > > The patch looks good but the above sentence is confusing. What might
> > > never happen? Reclaim or MAX event on dying memcg?
> > 
> > Direct reclaim and MAX events. I agree it might be not clear without
> > looking into the code. How about something like this?
> > 
> > "It's not/less of an issue in a generic case because consequent
> > allocations from a process context will trigger the direct reclaim
> > and MEMCG_MAX events will be raised. However a bpf map can belong
> > to a dying/abandoned memory cgroup, so there will be no allocations
> > from a process context and no MEMCG_MAX events will be triggered."
> 
> Could you expand a little bit more on the situation? Can those charges to
> offline memcg happen indefinitely? How can it ever go away then? Also is
> this something that we actually want to encourage?

One more question. Mostly out of curiosity. How is userspace actually
acting on those events? Are watchers still active on those dead memcgs?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-04 15:07     ` Michal Hocko
  2022-07-04 15:30         ` Michal Hocko
@ 2022-07-05 20:49       ` Roman Gushchin
  2022-07-06  2:46         ` Yafang Shao
  1 sibling, 1 reply; 37+ messages in thread
From: Roman Gushchin @ 2022-07-05 20:49 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Shakeel Butt, Andrew Morton, Yafang Shao, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > events are not raised.
> > > >
> > > > It's not/less of an issue in a generic case because consequent
> > > > allocations from a process context will trigger the reclaim and
> > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > memory cgroup, so it might never happen.
> > > 
> > > The patch looks good but the above sentence is confusing. What might
> > > never happen? Reclaim or MAX event on dying memcg?
> > 
> > Direct reclaim and MAX events. I agree it might be not clear without
> > looking into the code. How about something like this?
> > 
> > "It's not/less of an issue in a generic case because consequent
> > allocations from a process context will trigger the direct reclaim
> > and MEMCG_MAX events will be raised. However a bpf map can belong
> > to a dying/abandoned memory cgroup, so there will be no allocations
> > from a process context and no MEMCG_MAX events will be triggered."
> 
> Could you expand a little bit more on the situation? Can those charges to
> offline memcg happen indefinitely?

Yes.

> How can it ever go away then?

Bpf map should be deleted by a user first.

> Also is this something that we actually want to encourage?

Not really. We can implement reparenting (probably objcg-based), I think it's
a good idea in general. I can take a look, but can't promise it will be fast.

In theory we can't forbid deleting cgroups with associated bpf maps, but I don't
think it's a good idea.

> In other words shouldn't those remote charges be redirected when the
> target memcg is offline?

Reparenting is the best answer I have.

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread
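
For readers unfamiliar with the term: "reparenting" here refers to the
obj_cgroup indirection already used for kmem accounting. Long-lived objects
hold an obj_cgroup rather than the memcg itself, and when the cgroup is
offlined its obj_cgroups are re-pointed at the parent, so later charges land
in a live cgroup. A heavily simplified conceptual sketch follows; it is not the
real mm/memcontrol.c code, and charge_memcg() is a hypothetical helper.

    /* Conceptual sketch only: the real obj_cgroup carries a refcount and an
     * RCU-protected pointer, and reparenting walks all objcgs of the dying
     * memcg under a lock. */
    struct mem_cgroup;

    struct obj_cgroup {
            struct mem_cgroup *memcg;       /* current charge target */
    };

    /* hypothetical helper standing in for the actual charge path */
    int charge_memcg(struct mem_cgroup *memcg, unsigned int nr_pages);

    /* consumers always go through the objcg, never cache the memcg */
    static int objcg_charge(struct obj_cgroup *objcg, unsigned int nr_pages)
    {
            return charge_memcg(objcg->memcg, nr_pages);
    }

    /* called when the objcg's original memcg goes offline */
    static void objcg_reparent(struct obj_cgroup *objcg, struct mem_cgroup *parent)
    {
            objcg->memcg = parent;
    }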

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-05 20:51           ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-05 20:51 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Shakeel Butt, Andrew Morton, Yafang Shao, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Mon, Jul 04, 2022 at 05:30:25PM +0200, Michal Hocko wrote:
> On Mon 04-07-22 17:07:32, Michal Hocko wrote:
> > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > >
> > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > events are not raised.
> > > > >
> > > > > It's not/less of an issue in a generic case because consequent
> > > > > allocations from a process context will trigger the reclaim and
> > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > memory cgroup, so it might never happen.
> > > > 
> > > > The patch looks good but the above sentence is confusing. What might
> > > > never happen? Reclaim or MAX event on dying memcg?
> > > 
> > > Direct reclaim and MAX events. I agree it might be not clear without
> > > looking into the code. How about something like this?
> > > 
> > > "It's not/less of an issue in a generic case because consequent
> > > allocations from a process context will trigger the direct reclaim
> > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > from a process context and no MEMCG_MAX events will be triggered."
> > 
> > Could you expand a little bit more on the situation? Can those charges to
> > offline memcg happen indefinitely? How can it ever go away then? Also is
> > this something that we actually want to encourage?
> 
> One more question. Mostly out of curiosity. How is userspace actually
> acting on those events? Are watchers still active on those dead memcgs?

Idk, the whole problem was reported by Yafang, so he probably has a better
answer. But in general events are recursive and the cgroup doesn't have
to be dying, it can be simple abandoned.

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-04 15:12   ` Michal Hocko
@ 2022-07-05 20:55   ` Roman Gushchin
  -1 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-05 20:55 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Yafang Shao, Johannes Weiner, Shakeel Butt,
	Muchun Song, cgroups, linux-mm, bpf

On Mon, Jul 04, 2022 at 05:12:54PM +0200, Michal Hocko wrote:
> On Fri 01-07-22 20:35:21, Roman Gushchin wrote:
> > Yafang Shao reported an issue related to the accounting of bpf
> > memory: if a bpf map is charged indirectly for memory consumed
> > from an interrupt context and allocations are enforced, MEMCG_MAX
> > events are not raised.
> 
> So I guess this will be a GFP_ATOMIC request failing due to the hard
> limit, right? I think it would be easier to understand if the specific
> allocation request type was mentioned.

It all started from the discussion here:
https://www.spinics.net/lists/linux-mm/msg302319.html

Please, take a look.

> 
> > It's not/less of an issue in a generic case because consequent
> > allocations from a process context will trigger the reclaim and
> > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > memory cgroup, so it might never happen. So the cgroup can
> > significantly exceed the memory.max limit without even triggering
> > MEMCG_MAX events.
> 
> More on that in other reply.
> 
> > Fix this by making sure that we never enforce allocations without
> > raising a MEMCG_MAX event.
> > 
> > Reported-by: Yafang Shao <laoar.shao@gmail.com>
> > Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Shakeel Butt <shakeelb@google.com>
> > Cc: Muchun Song <songmuchun@bytedance.com>
> > Cc: cgroups@vger.kernel.org
> > Cc: linux-mm@kvack.org
> > Cc: bpf@vger.kernel.org
> 
> The patch makes sense to me though even without the weird charge to a
> dead memcg aspect. It is true that a very calm memcg can trigger the
> event much later after a GFP_ATOMIC charge (or __GFP_HIGH in general)
> fails.

Good point!

> 
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-05 20:51           ` Roman Gushchin
@ 2022-07-06  2:40             ` Yafang Shao
  -1 siblings, 0 replies; 37+ messages in thread
From: Yafang Shao @ 2022-07-06  2:40 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 6, 2022 at 4:52 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Mon, Jul 04, 2022 at 05:30:25PM +0200, Michal Hocko wrote:
> > On Mon 04-07-22 17:07:32, Michal Hocko wrote:
> > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > >
> > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > events are not raised.
> > > > > >
> > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > allocations from a process context will trigger the reclaim and
> > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > memory cgroup, so it might never happen.
> > > > >
> > > > > The patch looks good but the above sentence is confusing. What might
> > > > > never happen? Reclaim or MAX event on dying memcg?
> > > >
> > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > looking into the code. How about something like this?
> > > >
> > > > "It's not/less of an issue in a generic case because consequent
> > > > allocations from a process context will trigger the direct reclaim
> > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > from a process context and no MEMCG_MAX events will be triggered."
> > >
> > > Could you expand a little bit more on the situation? Can those charges to
> > > offline memcg happen indefinitely? How can it ever go away then? Also is
> > > this something that we actually want to encourage?
> >
> > One more question. Mostly out of curiosity. How is userspace actually
> > acting on those events? Are watchers still active on those dead memcgs?
>
> Idk, the whole problem was reported by Yafang, so he probably has a better
> answer. But in general events are recursive and the cgroup doesn't have
> to be dying, it can be simply abandoned.
>

Regarding the pinned bpf programs, they can run without a user agent.
That means the cgroup may not be dead, but just not populated.
(But in our case, the cgroup will be deleted after the user agent exits.)

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 37+ messages in thread
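
For context, a pinned map that outlives its user agent can be created as in the
libbpf sketch below (the map name, sizes and pin path are arbitrary examples).
Once pinned in bpffs the map stays alive after the creating process exits,
while its future element allocations remain charged to that process's memory
cgroup, which is exactly the dying/abandoned-memcg case discussed above.

    /* Example only: create a non-preallocated hash map and pin it so it
     * outlives the creating process. Requires libbpf. */
    #include <stdio.h>
    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    int main(void)
    {
            LIBBPF_OPTS(bpf_map_create_opts, opts,
                        .map_flags = BPF_F_NO_PREALLOC); /* allocate elements on demand */
            int fd;

            fd = bpf_map_create(BPF_MAP_TYPE_HASH, "demo_map",
                                sizeof(__u32), sizeof(__u64), 10240, &opts);
            if (fd < 0) {
                    perror("bpf_map_create");
                    return 1;
            }

            /* pinning keeps the map (and its memcg charges) alive after exit */
            if (bpf_obj_pin(fd, "/sys/fs/bpf/demo_map")) {
                    perror("bpf_obj_pin");
                    return 1;
            }
            return 0;
    }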

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-05 20:49       ` Roman Gushchin
@ 2022-07-06  2:46         ` Yafang Shao
  2022-07-06  3:28           ` Roman Gushchin
  0 siblings, 1 reply; 37+ messages in thread
From: Yafang Shao @ 2022-07-06  2:46 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > >
> > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > events are not raised.
> > > > >
> > > > > It's not/less of an issue in a generic case because consequent
> > > > > allocations from a process context will trigger the reclaim and
> > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > memory cgroup, so it might never happen.
> > > >
> > > > The patch looks good but the above sentence is confusing. What might
> > > > never happen? Reclaim or MAX event on dying memcg?
> > >
> > > Direct reclaim and MAX events. I agree it might be not clear without
> > > looking into the code. How about something like this?
> > >
> > > "It's not/less of an issue in a generic case because consequent
> > > allocations from a process context will trigger the direct reclaim
> > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > from a process context and no MEMCG_MAX events will be triggered."
> >
> > Could you expand a little bit more on the situation? Can those charges to
> > offline memcg happen indefinitely?
>
> Yes.
>
> > How can it ever go away then?
>
> Bpf map should be deleted by a user first.
>

It can't apply to pinned bpf maps, because the user expects the bpf
maps to continue working after the user agent exits.

> > Also is this something that we actually want to encourage?
>
> Not really. We can implement reparenting (probably objcg-based), I think it's
> a good idea in general. I can take a look, but can't promise it will be fast.
>
> In theory we can't forbid deleting cgroups with associated bpf maps, but I don't
> think it's a good idea.
>

Agreed. It is not a good idea.

> > In other words shouldn't those remote charges be redirected when the
> > target memcg is offline?
>
> Reparenting is the best answer I have.
>

At the cost of increasing the complexity of deployment, that may not
be a good idea either.

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-06  2:46         ` Yafang Shao
@ 2022-07-06  3:28           ` Roman Gushchin
  2022-07-06  3:42               ` Yafang Shao
  0 siblings, 1 reply; 37+ messages in thread
From: Roman Gushchin @ 2022-07-06  3:28 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > >
> > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > events are not raised.
> > > > > >
> > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > allocations from a process context will trigger the reclaim and
> > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > memory cgroup, so it might never happen.
> > > > >
> > > > > The patch looks good but the above sentence is confusing. What might
> > > > > never happen? Reclaim or MAX event on dying memcg?
> > > >
> > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > looking into the code. How about something like this?
> > > >
> > > > "It's not/less of an issue in a generic case because consequent
> > > > allocations from a process context will trigger the direct reclaim
> > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > from a process context and no MEMCG_MAX events will be triggered."
> > >
> > > Could you expand a little bit more on the situation? Can those charges to
> > > offline memcg happen indefinitely?
> >
> > Yes.
> >
> > > How can it ever go away then?
> >
> > Bpf map should be deleted by a user first.
> >
> 
> It can't apply to pinned bpf maps, because the user expects the bpf
> maps to continue working after the user agent exits.
> 
> > > Also is this something that we actually want to encourage?
> >
> > Not really. We can implement reparenting (probably objcg-based), I think it's
> > a good idea in general. I can take a look, but can't promise it will be fast.
> >
> > In theory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > think it's a good idea.
> >
> 
> Agreed. It is not a good idea.
> 
> > > In other words shouldn't those remote charges be redirected when the
> > > target memcg is offline?
> >
> > Reparenting is the best answer I have.
> >
> 
> At the cost of increasing the complexity of deployment, that may not
> be a good idea either.

What do you mean? Can you please elaborate on it?

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-06  3:28           ` Roman Gushchin
@ 2022-07-06  3:42               ` Yafang Shao
  0 siblings, 0 replies; 37+ messages in thread
From: Yafang Shao @ 2022-07-06  3:42 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 6, 2022 at 11:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> > On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > >
> > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > events are not raised.
> > > > > > >
> > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > memory cgroup, so it might never happen.
> > > > > >
> > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > >
> > > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > > looking into the code. How about something like this?
> > > > >
> > > > > "It's not/less of an issue in a generic case because consequent
> > > > > allocations from a process context will trigger the direct reclaim
> > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > >
> > > > Could you expand little bit more on the situation? Can those charges to
> > > > offline memcg happen indefinetely?
> > >
> > > Yes.
> > >
> > > > How can it ever go away then?
> > >
> > > Bpf map should be deleted by a user first.
> > >
> >
> > It can't apply to pinned bpf maps, because the user expects the bpf
> > maps to continue working after the user agent exits.
> >
> > > > Also is this something that we actually want to encourage?
> > >
> > > Not really. We can implement reparenting (probably objcg-based), I think it's
> > > a good idea in general. I can take a look, but can't promise it will be fast.
> > >
> > > In thory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > > thinks it's a good idea.
> > >
> >
> > Agreed. It is not a good idea.
> >
> > > > In other words shouldn't those remote charges be redirected when the
> > > > target memcg is offline?
> > >
> > > Reparenting is the best answer I have.
> > >
> >
> > At the cost of increasing the complexity of deployment, that may not
> > be a good idea neither.
>
> What do you mean? Can you please elaborate on it?
>

                   parent memcg
                         |
                    bpf memcg   <- limit the memory size of bpf programs
                        /           \
         bpf user agent     pinned bpf program

After bpf user agents exit, the bpf memcg will be dead, and then all
its memory will be reparented.
That is okay for preallocated bpf maps, but not okay for
non-preallocated bpf maps.
The bpf maps will continue to charge, but since all of the dead
memcg's memory and its objcg are reparented, we have to limit the bpf
memory size in the parent instead, as follows,

                   parent memcg   <- limit the memory size of bpf programs
                         |
                    bpf memcg
                        /           \
         bpf user agent     pinned bpf program

That means the parent memcg can't be deleted and can only contain one bpf memcg.
It may work if we use systemd to manage the memcgs, but it will be a
problem if we use k8s to manage the memcgs.

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-06  3:56                 ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-06  3:56 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 06, 2022 at 11:42:50AM +0800, Yafang Shao wrote:
> On Wed, Jul 6, 2022 at 11:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> > > On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > > >
> > > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > > events are not raised.
> > > > > > > >
> > > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > > memory cgroup, so it might never happen.
> > > > > > >
> > > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > > >
> > > > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > > > looking into the code. How about something like this?
> > > > > >
> > > > > > "It's not/less of an issue in a generic case because consequent
> > > > > > allocations from a process context will trigger the direct reclaim
> > > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > > >
> > > > > Could you expand little bit more on the situation? Can those charges to
> > > > > offline memcg happen indefinetely?
> > > >
> > > > Yes.
> > > >
> > > > > How can it ever go away then?
> > > >
> > > > Bpf map should be deleted by a user first.
> > > >
> > >
> > > It can't apply to pinned bpf maps, because the user expects the bpf
> > > maps to continue working after the user agent exits.
> > >
> > > > > Also is this something that we actually want to encourage?
> > > >
> > > > Not really. We can implement reparenting (probably objcg-based), I think it's
> > > > a good idea in general. I can take a look, but can't promise it will be fast.
> > > >
> > > > In thory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > > > thinks it's a good idea.
> > > >
> > >
> > > Agreed. It is not a good idea.
> > >
> > > > > In other words shouldn't those remote charges be redirected when the
> > > > > target memcg is offline?
> > > >
> > > > Reparenting is the best answer I have.
> > > >
> > >
> > > At the cost of increasing the complexity of deployment, that may not
> > > be a good idea neither.
> >
> > What do you mean? Can you please elaborate on it?
> >
> 
>                    parent memcg
>                          |
>                     bpf memcg   <- limit the memory size of bpf
> programs
>                         /           \
>          bpf user agent     pinned bpf program
> 
> After bpf user agents exit, the bpf memcg will be dead, and then all
> its memory will be reparented.
> That is okay for preallocated bpf maps, but not okay for
> non-preallocated bpf maps.
> Because the bpf maps will continue to charge, but as all its memory
> and objcg are reparented, so we have to limit the bpf memory size in
> the parent as follows,

So you're relying on the memory limit of a dying cgroup?
Sorry, but I don't think we can seriously discuss such a design.
A dying cgroup is invisible to a user: a user can't change any tunables
and has zero visibility into any stats or charges. Why would you do this?

If you want the cgroup to be an active part of the memory management
process, don't delete it. There are exactly zero guarantees about what
happens with a memory cgroup after it has been deleted by a user; it's
all implementation details.

Anyway, here is the patch for reparenting bpf maps:
https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c

I'm going to post it to bpf@ after some testing.

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread
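
To illustrate the objcg-based reparenting idea linked above: instead of
pinning a mem_cgroup, the map would pin an obj_cgroup, and obj_cgroups
are re-pointed to the parent cgroup when their original cgroup goes
offline, so later charges land in a live ancestor. A rough,
hypothetical sketch only; the lookup helper below is a placeholder and
does not come from the actual patch.

#include <linux/memcontrol.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Hypothetical sketch: the map pins an obj_cgroup instead of a memcg. */
struct bpf_map_sketch {
	struct obj_cgroup *objcg;	/* taken at map creation time */
};

static void *map_alloc_elem(struct bpf_map_sketch *map, size_t size,
			    gfp_t flags, int node)
{
	struct mem_cgroup *memcg, *old_memcg;
	void *ptr;

	/*
	 * Placeholder helper: resolve the objcg to whatever memcg it
	 * currently points to.  Once the original cgroup is offlined
	 * and its objcgs are reparented, this returns the parent, so
	 * the charge (and any MEMCG_MAX event) hits a live cgroup.
	 */
	memcg = lookup_memcg_from_objcg(map->objcg);

	old_memcg = set_active_memcg(memcg);
	ptr = kmalloc_node(size, flags | __GFP_ACCOUNT, node);
	set_active_memcg(old_memcg);
	mem_cgroup_put(memcg);

	return ptr;
}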

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-06  3:56                 ` Roman Gushchin
@ 2022-07-06  4:02                   ` Yafang Shao
  -1 siblings, 0 replies; 37+ messages in thread
From: Yafang Shao @ 2022-07-06  4:02 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 6, 2022 at 11:56 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Wed, Jul 06, 2022 at 11:42:50AM +0800, Yafang Shao wrote:
> > On Wed, Jul 6, 2022 at 11:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> > > > On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > >
> > > > > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > > > >
> > > > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > > > events are not raised.
> > > > > > > > >
> > > > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > > > memory cgroup, so it might never happen.
> > > > > > > >
> > > > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > > > >
> > > > > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > > > > looking into the code. How about something like this?
> > > > > > >
> > > > > > > "It's not/less of an issue in a generic case because consequent
> > > > > > > allocations from a process context will trigger the direct reclaim
> > > > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > > > >
> > > > > > Could you expand little bit more on the situation? Can those charges to
> > > > > > offline memcg happen indefinetely?
> > > > >
> > > > > Yes.
> > > > >
> > > > > > How can it ever go away then?
> > > > >
> > > > > Bpf map should be deleted by a user first.
> > > > >
> > > >
> > > > It can't apply to pinned bpf maps, because the user expects the bpf
> > > > maps to continue working after the user agent exits.
> > > >
> > > > > > Also is this something that we actually want to encourage?
> > > > >
> > > > > Not really. We can implement reparenting (probably objcg-based), I think it's
> > > > > a good idea in general. I can take a look, but can't promise it will be fast.
> > > > >
> > > > > In thory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > > > > thinks it's a good idea.
> > > > >
> > > >
> > > > Agreed. It is not a good idea.
> > > >
> > > > > > In other words shouldn't those remote charges be redirected when the
> > > > > > target memcg is offline?
> > > > >
> > > > > Reparenting is the best answer I have.
> > > > >
> > > >
> > > > At the cost of increasing the complexity of deployment, that may not
> > > > be a good idea neither.
> > >
> > > What do you mean? Can you please elaborate on it?
> > >
> >
> >                    parent memcg
> >                          |
> >                     bpf memcg   <- limit the memory size of bpf
> > programs
> >                         /           \
> >          bpf user agent     pinned bpf program
> >
> > After bpf user agents exit, the bpf memcg will be dead, and then all
> > its memory will be reparented.
> > That is okay for preallocated bpf maps, but not okay for
> > non-preallocated bpf maps.
> > Because the bpf maps will continue to charge, but as all its memory
> > and objcg are reparented, so we have to limit the bpf memory size in
> > the parent as follows,
>
> So you're relying on the memory limit of a dying cgroup?

No, I didn't say that. What I said is that you can't use a dying
cgroup to limit it; that's why I said we have to use the parent memcg
to limit it.

> Sorry, but I don't think we can seriously discuss such a design.
> A dying cgroup is invisible for a user, a user can't change any tunables,
> they have zero visibility into any stats or charges. Why would you do this?
>
> If you want the cgroup to be an active part of the memory management
> process, don't delete it. There are exactly zero guarantees about what
> happens with a memory cgroup after being deleted by a user, it's all
> implementation details.
>
> Anyway, here is the patch for reparenting bpf maps:
> https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c
>
> I gonna post it to bpf@ after some testing.
>

I will take a look at it.
But AFAIK the reparenting can't resolve the problem of non-preallocated maps.


-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-06  4:19                     ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-06  4:19 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 06, 2022 at 12:02:49PM +0800, Yafang Shao wrote:
> On Wed, Jul 6, 2022 at 11:56 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Wed, Jul 06, 2022 at 11:42:50AM +0800, Yafang Shao wrote:
> > > On Wed, Jul 6, 2022 at 11:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > >
> > > > On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> > > > > On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > >
> > > > > > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > > > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > > > > >
> > > > > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > > > > events are not raised.
> > > > > > > > > >
> > > > > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > > > > memory cgroup, so it might never happen.
> > > > > > > > >
> > > > > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > > > > >
> > > > > > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > > > > > looking into the code. How about something like this?
> > > > > > > >
> > > > > > > > "It's not/less of an issue in a generic case because consequent
> > > > > > > > allocations from a process context will trigger the direct reclaim
> > > > > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > > > > >
> > > > > > > Could you expand little bit more on the situation? Can those charges to
> > > > > > > offline memcg happen indefinetely?
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > > How can it ever go away then?
> > > > > >
> > > > > > Bpf map should be deleted by a user first.
> > > > > >
> > > > >
> > > > > It can't apply to pinned bpf maps, because the user expects the bpf
> > > > > maps to continue working after the user agent exits.
> > > > >
> > > > > > > Also is this something that we actually want to encourage?
> > > > > >
> > > > > > Not really. We can implement reparenting (probably objcg-based), I think it's
> > > > > > a good idea in general. I can take a look, but can't promise it will be fast.
> > > > > >
> > > > > > In thory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > > > > > thinks it's a good idea.
> > > > > >
> > > > >
> > > > > Agreed. It is not a good idea.
> > > > >
> > > > > > > In other words shouldn't those remote charges be redirected when the
> > > > > > > target memcg is offline?
> > > > > >
> > > > > > Reparenting is the best answer I have.
> > > > > >
> > > > >
> > > > > At the cost of increasing the complexity of deployment, that may not
> > > > > be a good idea neither.
> > > >
> > > > What do you mean? Can you please elaborate on it?
> > > >
> > >
> > >                    parent memcg
> > >                          |
> > >                     bpf memcg   <- limit the memory size of bpf
> > > programs
> > >                         /           \
> > >          bpf user agent     pinned bpf program
> > >
> > > After bpf user agents exit, the bpf memcg will be dead, and then all
> > > its memory will be reparented.
> > > That is okay for preallocated bpf maps, but not okay for
> > > non-preallocated bpf maps.
> > > Because the bpf maps will continue to charge, but as all its memory
> > > and objcg are reparented, so we have to limit the bpf memory size in
> > > the parent as follows,
> >
> > So you're relying on the memory limit of a dying cgroup?
> 
> No. I didn't say it.  What I said is you can't use a dying cgroup to
> limit it, that's why I said that we have to use parant memcg to limit
> it.
> 
> > Sorry, but I don't think we can seriously discuss such a design.
> > A dying cgroup is invisible for a user, a user can't change any tunables,
> > they have zero visibility into any stats or charges. Why would you do this?
> >
> > If you want the cgroup to be an active part of the memory management
> > process, don't delete it. There are exactly zero guarantees about what
> > happens with a memory cgroup after being deleted by a user, it's all
> > implementation details.
> >
> > Anyway, here is the patch for reparenting bpf maps:
> > https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c
> >
> > I gonna post it to bpf@ after some testing.
> >
> 
> I will take a look at it.
> But AFAIK the reparenting can't resolve the problem of non-preallocated maps.

Sorry, what's the problem then?

Michal asked how we can prevent an indefinite pinning of a dying memcg by an associated
bpf map being used by other processes, and I guess the objcg-based reparenting is
the best answer here. You said it will complicate the deployment; what do you mean by that?

From a user's POV there is no visible difference. What am I missing here?
Yes, if we reparent the bpf map, memory.max of the original memory cgroup will
not apply, but as I said, if you want it to be effective, don't delete the cgroup.

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-06  4:19                     ` Roman Gushchin
@ 2022-07-06  4:33                       ` Yafang Shao
  -1 siblings, 0 replies; 37+ messages in thread
From: Yafang Shao @ 2022-07-06  4:33 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Michal Hocko, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed, Jul 6, 2022 at 12:19 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Wed, Jul 06, 2022 at 12:02:49PM +0800, Yafang Shao wrote:
> > On Wed, Jul 6, 2022 at 11:56 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > On Wed, Jul 06, 2022 at 11:42:50AM +0800, Yafang Shao wrote:
> > > > On Wed, Jul 6, 2022 at 11:28 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > >
> > > > > On Wed, Jul 06, 2022 at 10:46:48AM +0800, Yafang Shao wrote:
> > > > > > On Wed, Jul 6, 2022 at 4:49 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > >
> > > > > > > On Mon, Jul 04, 2022 at 05:07:30PM +0200, Michal Hocko wrote:
> > > > > > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > > > > > >
> > > > > > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > > > > > events are not raised.
> > > > > > > > > > >
> > > > > > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > > > > > memory cgroup, so it might never happen.
> > > > > > > > > >
> > > > > > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > > > > > >
> > > > > > > > > Direct reclaim and MAX events. I agree it might be not clear without
> > > > > > > > > looking into the code. How about something like this?
> > > > > > > > >
> > > > > > > > > "It's not/less of an issue in a generic case because consequent
> > > > > > > > > allocations from a process context will trigger the direct reclaim
> > > > > > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > > > > > >
> > > > > > > > Could you expand little bit more on the situation? Can those charges to
> > > > > > > > offline memcg happen indefinetely?
> > > > > > >
> > > > > > > Yes.
> > > > > > >
> > > > > > > > How can it ever go away then?
> > > > > > >
> > > > > > > Bpf map should be deleted by a user first.
> > > > > > >
> > > > > >
> > > > > > It can't apply to pinned bpf maps, because the user expects the bpf
> > > > > > maps to continue working after the user agent exits.
> > > > > >
> > > > > > > > Also is this something that we actually want to encourage?
> > > > > > >
> > > > > > > Not really. We can implement reparenting (probably objcg-based), I think it's
> > > > > > > a good idea in general. I can take a look, but can't promise it will be fast.
> > > > > > >
> > > > > > > In thory we can't forbid deleting cgroups with associated bpf maps, but I don't
> > > > > > > thinks it's a good idea.
> > > > > > >
> > > > > >
> > > > > > Agreed. It is not a good idea.
> > > > > >
> > > > > > > > In other words shouldn't those remote charges be redirected when the
> > > > > > > > target memcg is offline?
> > > > > > >
> > > > > > > Reparenting is the best answer I have.
> > > > > > >
> > > > > >
> > > > > > At the cost of increasing the complexity of deployment, that may not
> > > > > > be a good idea neither.
> > > > >
> > > > > What do you mean? Can you please elaborate on it?
> > > > >
> > > >
> > > >                    parent memcg
> > > >                          |
> > > >                     bpf memcg   <- limit the memory size of bpf
> > > > programs
> > > >                         /           \
> > > >          bpf user agent     pinned bpf program
> > > >
> > > > After bpf user agents exit, the bpf memcg will be dead, and then all
> > > > its memory will be reparented.
> > > > That is okay for preallocated bpf maps, but not okay for
> > > > non-preallocated bpf maps.
> > > > Because the bpf maps will continue to charge, but as all its memory
> > > > and objcg are reparented, so we have to limit the bpf memory size in
> > > > the parent as follows,
> > >
> > > So you're relying on the memory limit of a dying cgroup?
> >
> > No. I didn't say it.  What I said is you can't use a dying cgroup to
> > limit it, that's why I said that we have to use parant memcg to limit
> > it.
> >
> > > Sorry, but I don't think we can seriously discuss such a design.
> > > A dying cgroup is invisible for a user, a user can't change any tunables,
> > > they have zero visibility into any stats or charges. Why would you do this?
> > >
> > > If you want the cgroup to be an active part of the memory management
> > > process, don't delete it. There are exactly zero guarantees about what
> > > happens with a memory cgroup after being deleted by a user, it's all
> > > implementation details.
> > >
> > > Anyway, here is the patch for reparenting bpf maps:
> > > https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c
> > >
> > > I gonna post it to bpf@ after some testing.
> > >
> >
> > I will take a look at it.
> > But AFAIK the reparenting can't resolve the problem of non-preallocated maps.
>
> Sorry, what's the problem then?
>

The problem is that the bpf memcg, or its parent memcg, can't be
destroyed currently. IOW, you have to forbid the user from rmdir'ing
it.

Reparenting is an improvement for preallocated bpf maps, because all
of their memory is charged up front, so the memcg is no longer needed.
It can then be destroyed, and thus the reparenting is an improvement.

But for non-preallocated bpf maps, the memcg still has to do the
limiting work, which means it can't be destroyed currently.
If you reparent it, then the parent can't be destroyed either. So why
not forbid destroying the bpf memcg in the first place?
The reparenting just increases the complexity for this case.

> Michal asked how we can prevent an indefinite pinning of a dying memcg by an associated
> bpf map being used by other processes, and I guess the objcg-based reparenting is
> the best answer here. You said it will complicate the deployment? What does it mean?
>

See my reply above.

> From a user's POV there is no visible difference. What am I missing here?
> Yes, if we reparent the bpf map, memory.max of the original memory cgroup will
> not apply, but as I said, if you want it to be effective, don't delete the cgroup.
>

-- 
Regards
Yafang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-07  7:47               ` Michal Hocko
  0 siblings, 0 replies; 37+ messages in thread
From: Michal Hocko @ 2022-07-07  7:47 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Roman Gushchin, Shakeel Butt, Andrew Morton, Johannes Weiner,
	Muchun Song, Cgroups, Linux MM, bpf

On Wed 06-07-22 10:40:48, Yafang Shao wrote:
> On Wed, Jul 6, 2022 at 4:52 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Mon, Jul 04, 2022 at 05:30:25PM +0200, Michal Hocko wrote:
> > > On Mon 04-07-22 17:07:32, Michal Hocko wrote:
> > > > On Sat 02-07-22 08:39:14, Roman Gushchin wrote:
> > > > > On Fri, Jul 01, 2022 at 10:50:40PM -0700, Shakeel Butt wrote:
> > > > > > On Fri, Jul 1, 2022 at 8:35 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > > > > > >
> > > > > > > Yafang Shao reported an issue related to the accounting of bpf
> > > > > > > memory: if a bpf map is charged indirectly for memory consumed
> > > > > > > from an interrupt context and allocations are enforced, MEMCG_MAX
> > > > > > > events are not raised.
> > > > > > >
> > > > > > > It's not/less of an issue in a generic case because consequent
> > > > > > > allocations from a process context will trigger the reclaim and
> > > > > > > MEMCG_MAX events. However a bpf map can belong to a dying/abandoned
> > > > > > > memory cgroup, so it might never happen.
> > > > > >
> > > > > > The patch looks good but the above sentence is confusing. What might
> > > > > > never happen? Reclaim or MAX event on dying memcg?
> > > > >
> > > > > Direct reclaim and MAX events. I agree it might not be clear without
> > > > > looking into the code. How about something like this?
> > > > >
> > > > > "It's not/less of an issue in a generic case because consequent
> > > > > allocations from a process context will trigger the direct reclaim
> > > > > and MEMCG_MAX events will be raised. However a bpf map can belong
> > > > > to a dying/abandoned memory cgroup, so there will be no allocations
> > > > > from a process context and no MEMCG_MAX events will be triggered."
> > > >
> > > > Could you expand a little bit more on the situation? Can those charges
> > > > to an offline memcg happen indefinitely? How can it ever go away then?
> > > > Also, is this something that we actually want to encourage?
> > >
> > > One more question. Mostly out of curiosity. How is userspace actually
> > > acting on those events? Are watchers still active on those dead memcgs?
> >
> > Idk, the whole problem was reported by Yafang, so he probably has a better
> > answer. But in general events are recursive and the cgroup doesn't have
> > to be dying, it can be simply abandoned.
> >
> 
> Regarding pinned bpf programs, they can run without a user agent.
> That means the cgroup may not be dead, but just not populated.
> (But in our case, the cgroup will be deleted after the user agent exits.)

OK, that makes sense.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
  2022-07-06  3:56                 ` Roman Gushchin
  (?)
  (?)
@ 2022-07-07 22:41                 ` Alexei Starovoitov
  2022-07-08  3:18                     ` Roman Gushchin
  -1 siblings, 1 reply; 37+ messages in thread
From: Alexei Starovoitov @ 2022-07-07 22:41 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Yafang Shao, Michal Hocko, Shakeel Butt, Andrew Morton,
	Johannes Weiner, Muchun Song, Cgroups, Linux MM, bpf

On Tue, Jul 5, 2022 at 9:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> Anyway, here is the patch for reparenting bpf maps:
> https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c
>
> I'm going to post it to bpf@ after some testing.

Please do. It looks good.
It needs #ifdef CONFIG_MEMCG_KMEM
because get_obj_cgroup_from_current() is undefined otherwise.
Ideally just adding a static inline to a .h ?
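
Something like this, perhaps? An untested sketch, assuming the stub would
sit in include/linux/memcontrol.h next to the existing !CONFIG_MEMCG_KMEM
stubs:

#ifndef CONFIG_MEMCG_KMEM
/* kmem accounting compiled out: there is no objcg to return */
static inline struct obj_cgroup *get_obj_cgroup_from_current(void)
{
        return NULL;
}
#endif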

and
if (map->objcg)
   memcg = get_mem_cgroup_from_objcg(map->objcg);

or !NULL check inside get_mem_cgroup_from_objcg()
which would be better.
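
For the map->objcg check, maybe a small helper, roughly like this (again
an untested sketch; bpf_map_get_memcg() is just an illustrative name, and
map->objcg is the field added by the reparenting patch):

/* Pick the memcg to charge map allocations to; fall back to the root
 * memcg when the map carries no objcg.
 */
static struct mem_cgroup *bpf_map_get_memcg(const struct bpf_map *map)
{
        if (map->objcg)
                return get_mem_cgroup_from_objcg(map->objcg);

        return root_mem_cgroup;
}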

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations
@ 2022-07-08  3:18                     ` Roman Gushchin
  0 siblings, 0 replies; 37+ messages in thread
From: Roman Gushchin @ 2022-07-08  3:18 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Yafang Shao, Michal Hocko, Shakeel Butt, Andrew Morton,
	Johannes Weiner, Muchun Song, Cgroups, Linux MM, bpf

On Thu, Jul 07, 2022 at 03:41:11PM -0700, Alexei Starovoitov wrote:
> On Tue, Jul 5, 2022 at 9:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > Anyway, here is the patch for reparenting bpf maps:
> > https://github.com/rgushchin/linux/commit/f57df8bb35770507a4624fe52216b6c14f39c50c
> >
> > I'm going to post it to bpf@ after some testing.
> 
> Please do. It looks good.
> It needs #ifdef CONFIG_MEMCG_KMEM
> because get_obj_cgroup_from_current() is undefined otherwise.
> Ideally just adding a static inline to a .h ?

Actually all call sites are already under CONFIG_MEMCG_KMEM.

> 
> and
> if (map->objcg)
>    memcg = get_mem_cgroup_from_objcg(map->objcg);
> 
> or !NULL check inside get_mem_cgroup_from_objcg()
> which would be better.

Yes, you're right, for now we need to handle it specially.

In the near future it won't be necessary. There are patches in
mm-unstable which make the objcg API usable outside of CONFIG_MEMCG_KMEM.
In particular, an objcg will be created for the root_mem_cgroup, so
map->objcg will always point at a valid objcg and we will be able to
drop this check.
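
I.e. eventually the helper could shrink to something like this (just a
sketch, assuming those mm-unstable patches land and map->objcg is then
always set; the bpf_map_get_memcg() name is illustrative):

static struct mem_cgroup *bpf_map_get_memcg(const struct bpf_map *map)
{
        /* with a root objcg, map->objcg always points at a valid objcg */
        return get_mem_cgroup_from_objcg(map->objcg);
}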

Will post an updated version shortly.

Thanks!

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2022-07-08  3:19 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-02  3:35 [PATCH] mm: memcontrol: do not miss MEMCG_MAX events for enforced allocations Roman Gushchin
2022-07-02  5:50 ` Shakeel Butt
2022-07-02  5:50   ` Shakeel Butt
2022-07-02 15:39   ` Roman Gushchin
2022-07-02 15:39     ` Roman Gushchin
2022-07-03  5:36     ` Shakeel Butt
2022-07-03  5:36       ` Shakeel Butt
2022-07-03 22:50       ` Roman Gushchin
2022-07-03 22:50         ` Roman Gushchin
2022-07-04 15:07     ` Michal Hocko
2022-07-04 15:30       ` Michal Hocko
2022-07-04 15:30         ` Michal Hocko
2022-07-05 20:51         ` Roman Gushchin
2022-07-05 20:51           ` Roman Gushchin
2022-07-06  2:40           ` Yafang Shao
2022-07-06  2:40             ` Yafang Shao
2022-07-07  7:47             ` Michal Hocko
2022-07-07  7:47               ` Michal Hocko
2022-07-05 20:49       ` Roman Gushchin
2022-07-06  2:46         ` Yafang Shao
2022-07-06  3:28           ` Roman Gushchin
2022-07-06  3:42             ` Yafang Shao
2022-07-06  3:42               ` Yafang Shao
2022-07-06  3:56               ` Roman Gushchin
2022-07-06  3:56                 ` Roman Gushchin
2022-07-06  4:02                 ` Yafang Shao
2022-07-06  4:02                   ` Yafang Shao
2022-07-06  4:19                   ` Roman Gushchin
2022-07-06  4:19                     ` Roman Gushchin
2022-07-06  4:33                     ` Yafang Shao
2022-07-06  4:33                       ` Yafang Shao
2022-07-07 22:41                 ` Alexei Starovoitov
2022-07-08  3:18                   ` Roman Gushchin
2022-07-08  3:18                     ` Roman Gushchin
2022-07-04 15:12 ` Michal Hocko
2022-07-04 15:12   ` Michal Hocko
2022-07-05 20:55   ` Roman Gushchin
