From: Yafang Shao <laoar.shao@gmail.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>, Martin Lau <kafai@fb.com>,
	Song Liu <songliubraving@fb.com>, Yonghong Song <yhs@fb.com>,
	john fastabend <john.fastabend@gmail.com>,
	KP Singh <kpsingh@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	penberg@kernel.org, David Rientjes <rientjes@google.com>,
	iamjoonsoo.kim@lge.com, Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Roman Gushchin <guro@fb.com>, Linux MM <linux-mm@kvack.org>,
	netdev <netdev@vger.kernel.org>, bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH RFC 0/9] bpf, mm: recharge bpf memory from offline memcg
Date: Wed, 9 Mar 2022 21:28:58 +0800	[thread overview]
Message-ID: <CALOAHbARWARjK4cAjUfsGDy3G4sAZaHRiFQsbjNc=EfHsCfnnQ@mail.gmail.com> (raw)
In-Reply-To: <Yif+QZbCALQcYrFZ@carbon.dhcp.thefacebook.com>

On Wed, Mar 9, 2022 at 9:09 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Tue, Mar 08, 2022 at 01:10:47PM +0000, Yafang Shao wrote:
> > When we use memcg to limit containers that load bpf progs and maps,
> > we find an issue: the lifecycles of the container and the bpf objects
> > are not always the same, because we may pin the maps and progs while
> > updating only the container. Once a container that has already pinned
> > progs and maps is restarted, the pinned progs and maps are no longer
> > charged to it. In other words, such a container can steal memory from
> > the host, which is not what we expect. This patchset aims to resolve
> > this issue.
> >
> > After the container is restarted, the old memcg charged by the pinned
> > progs and maps will be offline, but it won't be freed until all of the
> > related maps and progs are freed. If we want to charge this bpf memory
> > to the newly started memcg, we should uncharge it from the offline
> > memcg first and then charge it to the new one. Since we already know
> > how the bpf memory is allocated and freed, we also know how to charge
> > and uncharge it. This patchset implements charge and uncharge methods
> > for this memory.
> >
> > Regarding how to do the recharge, we decided to implement a new bpf
> > syscall command. With it, the agent running in the container can
> > trigger the recharge. As of now we only implement it for bpf hash
> > maps. Below is a simple example of how to do the recharge,
> >
> > ====
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> > #include <sys/syscall.h>
> > #include <linux/bpf.h>
> >
> > int main(int argc, char *argv[])
> > {
> >       union bpf_attr attr = {};
> >       int map_id;
> >       int ret;
> >
> >       if (argc < 2) {
> >               printf("Please specify a map id\n");
> >               exit(1);
> >       }
> >
> >       /* Recharge the map with this id to the current memcg. */
> >       map_id = atoi(argv[1]);
> >       attr.map_id = map_id;
> >       ret = syscall(SYS_bpf, BPF_MAP_RECHARGE, &attr, sizeof(attr));
> >       if (ret < 0)
> >               perror("BPF_MAP_RECHARGE");
> >
> >       return 0;
> > }
> >
> > ====
> >
> > Patches #1 and #2 are for observability, with which we can easily check
> > whether a bpf map is charged to a memcg and whether that memcg is offline.
> > Patches #3, #4 and #5 add the charge and uncharge methods for vmalloc-ed,
> > kmalloc-ed and percpu memory.
> > Patches #6~#9 implement the recharge of the bpf hash map, which is the
> > map type most used by our bpf services. The other map types haven't been
> > implemented yet, nor have bpf progs.
> >
> > This patchset is still a POC, with limited testing. Any feedback is
> > welcome.
>
> Hello Yafang!
>
> It's an interesting topic, which goes well beyond bpf. In general, on cgroup
> offlining we either do nothing or recharge pages to the parent cgroup (the
> latter is preferred), which helps to release the pinned memcg structure.
>

We have thought about recharging pages to the parent cgroup (the root
memcg in our case), but it can't resolve our issue. Releasing the
pinned memcg struct is the benefit of recharging pages to the parent,
but since there won't be too many memcgs pinned by bpf, it may not be
worth it.


> Your approach raises some questions:

Nice questions.

> 1) what if the new cgroup is not large enough to contain the bpf map?

The recharge is supposed to be triggered at container start time.
After the container is started, the agent which loads the bpf
programs will proceed as follows (a rough C sketch follows the list):
1. Check if the bpf program has already been loaded; if not, goto 5.
2. Check if the bpf program will pin maps or progs; if not, goto 6.
3. Check if the pinned maps and progs are charged to an offline memcg;
   if not, goto 6.
4. Recharge the pinned maps or progs to the current memcg, then goto 6.
5. Load the new bpf program, and pin the maps and progs if desired.
6. End.
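
To make that flow concrete, below is a rough, untested sketch of the
agent's startup path. It assumes the BPF_MAP_RECHARGE command from
patch #7; map_is_charged_to_offline_memcg() and load_and_pin_progs()
are hypothetical placeholders, since patches #1 and #2 only expose the
memcg info via bpftool,

====
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>	/* libbpf: bpf_obj_get() */

/* Hypothetical helpers, not part of this patchset's UAPI. */
extern bool map_is_charged_to_offline_memcg(int map_fd);
extern int load_and_pin_progs(void);		/* step 5, not shown */

static int recharge_map(__u32 map_id)
{
	union bpf_attr attr = {};

	attr.map_id = map_id;
	return syscall(SYS_bpf, BPF_MAP_RECHARGE, &attr, sizeof(attr));
}

int agent_startup(const char *pin_path, __u32 map_id)
{
	int map_fd;

	/* Step 1: is the pinned map already there? */
	map_fd = bpf_obj_get(pin_path);
	if (map_fd < 0)
		return load_and_pin_progs();

	/* Steps 2-3: only recharge if the owner memcg is offline. */
	if (!map_is_charged_to_offline_memcg(map_fd)) {
		close(map_fd);
		return 0;			/* step 6 */
	}

	/* Step 4: charge the pinned map to the current memcg. */
	if (recharge_map(map_id) < 0) {
		perror("BPF_MAP_RECHARGE");	/* memcg limit too low? */
		close(map_fd);
		return -1;
	}
	close(map_fd);
	return 0;
}
====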

If the recharge fails, it means the memcg limit is too low, and we
should reconsider the limit of the container.

Regarding other cases where the recharge may happen at runtime, I
think a failure there is a common OOM case: the usage in this
container is out of memory, and we should kill something.


> 2) does it mean that some userspace app will monitor the state of the cgroup
> which was the original owner of the bpf map and recharge once it's deleted?

In our use case, we don't need to monitor that behavior. The agent
which loads the bpf programs has the responsibility to do the
recharge. As all the agents are controlled by ourselves, it is easy
to do it like that.

For more generic use cases, the bpf maintenance can be done in a
sidecar container in the containerized environment. The admin can
provide such a sidecar to bpf owners. The admin can also introduce an
agent on the host to check if there are maps or progs charged to an
offline memcg and then take action, as sketched below. It is easy to
find which one owns the pinned maps or progs, as the pinned path is
unique.
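
Such a host agent could walk all map ids with the existing libbpf
id-iteration APIs; here is a minimal sketch, where the offline-memcg
check is again a hypothetical placeholder,

====
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <bpf/bpf.h>	/* libbpf: bpf_map_get_next_id(), bpf_map_get_fd_by_id() */

/* Hypothetical: patches #1/#2 only expose this info via bpftool. */
extern bool map_is_charged_to_offline_memcg(int map_fd);

int main(void)
{
	__u32 id = 0;

	/* Walk every bpf map id on the host. */
	while (bpf_map_get_next_id(id, &id) == 0) {
		int fd = bpf_map_get_fd_by_id(id);

		if (fd < 0)
			continue;
		if (map_is_charged_to_offline_memcg(fd))
			printf("map id %u: charged to an offline memcg\n", id);
		close(fd);
	}
	return 0;
}
====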

> 3) what if there are several cgroups sharing the same map? who will be
> the next owner?

I think we can follow the same rule we currently use for pages shared
across memcgs: whoever loads it first owns the map. Then, after the
first owner exits, the next owner is whoever does the recharge first.

> 4) because recharging is fully voluntary, why would any application want to
> do it, if it can just use the memory for free? It doesn't really look like a
> working resource control mechanism.
>

As I explained in 2), all the agents are under our control, so we can
easily handle it like that. For generic use cases, an agent running on
the host, or a sidecar (or SDK) provided to bpf users, can also handle
it.

> Will reparenting work for your case? If not, can you, please, describe the
> problem you're trying to solve by recharging the memory?
>

Reparenting doesn't work for us.
The problem is memory resource control: the limit on the bpf
containers will be useless if the lifecycles of the bpf progs and the
containers are not the same. The containers are upgraded, IOW
restarted, more frequently than the bpf progs and maps; that is also
one of the reasons why we choose to pin them on the host.

-- 
Thanks
Yafang

Thread overview: 18+ messages
2022-03-08 13:10 [PATCH RFC 0/9] bpf, mm: recharge bpf memory from offline memcg Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 1/9] bpftool: fix print error when show bpf man Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 2/9] bpftool: show memcg info of bpf map Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 3/9] mm: add methord to charge kmalloc-ed address Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 4/9] mm: add methord to charge vmalloc-ed address Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 5/9] mm: add methord to charge percpu address Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 6/9] bpf: add a helper to find map by id Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 7/9] bpf: add BPF_MAP_RECHARGE syscall Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 8/9] bpf: make bpf_map_{save, release}_memcg public Yafang Shao
2022-03-08 13:10 ` [PATCH RFC 9/9] bpf: support recharge for hash map Yafang Shao
2022-03-09  1:09 ` [PATCH RFC 0/9] bpf, mm: recharge bpf memory from offline memcg Roman Gushchin
2022-03-09 13:28   ` Yafang Shao [this message]
2022-03-09 23:35     ` Roman Gushchin
2022-03-10 13:20       ` Yafang Shao
2022-03-10 18:00         ` Roman Gushchin
2022-03-11 12:48           ` Yafang Shao
2022-03-11 17:49             ` Roman Gushchin
2022-03-12  6:45               ` Yafang Shao
