linux-kernel.vger.kernel.org archive mirror
From: Song Liu <songliubraving@fb.com>
To: Roman Gushchin <guro@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>, Tejun Heo <tj@kernel.org>,
	"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Kernel Team <Kernel-team@fb.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 bpf-next] bpf: fix cgroup bpf release synchronization
Date: Wed, 26 Jun 2019 20:04:39 +0000	[thread overview]
Message-ID: <CFCB43C0-74FB-4E93-9FCA-605A98C4C960@fb.com> (raw)
In-Reply-To: <20190625213858.22459-1-guro@fb.com>



> On Jun 25, 2019, at 2:38 PM, Roman Gushchin <guro@fb.com> wrote:
> 
> Since commit 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf
> from cgroup itself"), cgroup_bpf release occurs asynchronously
> (from a worker context), and before the release of the cgroup itself.
> 
> This introduced a previously non-existing race between the release
> and update paths. E.g. if a leaf's cgroup_bpf is released while a new
> bpf program is being attached to one of its ancestor cgroups. The race
> can result in a double-free and other memory corruptions.
> 
> To fix the problem, let's protect the body of cgroup_bpf_release()
> with cgroup_mutex, as it effectively was previously, when all this
> code was called from the cgroup release path with cgroup mutex held.
> 
> Also let's skip cgroups which have no chance of invoking a bpf
> program, on the update path. If the cgroup bpf refcnt has reached 0,
> it means that the cgroup is offline (has no attached processes) and
> no associated sockets are left. In that case there is no point in
> updating the effective progs array, and doing so can lead to a leak
> if it happens after the release. So, let's skip such cgroups.
> 
> Big thanks to Tejun Heo for discovering and debugging this
> problem!
> 
> Fixes: 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf from cgroup itself")
> Reported-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>

LGTM. 

Acked-by: Song Liu <songliubraving@fb.com>


> ---
> kernel/bpf/cgroup.c | 19 ++++++++++++++++++-
> 1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index c225c42e114a..077ed3a19848 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -16,6 +16,8 @@
> #include <linux/bpf-cgroup.h>
> #include <net/sock.h>
> 
> +#include "../cgroup/cgroup-internal.h"
> +
> DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
> EXPORT_SYMBOL(cgroup_bpf_enabled_key);
> 
> @@ -38,6 +40,8 @@ static void cgroup_bpf_release(struct work_struct *work)
> 	struct bpf_prog_array *old_array;
> 	unsigned int type;
> 
> +	mutex_lock(&cgroup_mutex);
> +
> 	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.progs); type++) {
> 		struct list_head *progs = &cgrp->bpf.progs[type];
> 		struct bpf_prog_list *pl, *tmp;
> @@ -54,10 +58,12 @@ static void cgroup_bpf_release(struct work_struct *work)
> 		}
> 		old_array = rcu_dereference_protected(
> 				cgrp->bpf.effective[type],
> -				percpu_ref_is_dying(&cgrp->bpf.refcnt));
> +				lockdep_is_held(&cgroup_mutex));
> 		bpf_prog_array_free(old_array);
> 	}
> 
> +	mutex_unlock(&cgroup_mutex);
> +
> 	percpu_ref_exit(&cgrp->bpf.refcnt);
> 	cgroup_put(cgrp);
> }
> @@ -229,6 +235,9 @@ static int update_effective_progs(struct cgroup *cgrp,
> 	css_for_each_descendant_pre(css, &cgrp->self) {
> 		struct cgroup *desc = container_of(css, struct cgroup, self);
> 
> +		if (percpu_ref_is_zero(&desc->bpf.refcnt))
> +			continue;
> +
> 		err = compute_effective_progs(desc, type, &desc->bpf.inactive);
> 		if (err)
> 			goto cleanup;
> @@ -238,6 +247,14 @@ static int update_effective_progs(struct cgroup *cgrp,
> 	css_for_each_descendant_pre(css, &cgrp->self) {
> 		struct cgroup *desc = container_of(css, struct cgroup, self);
> 
> +		if (percpu_ref_is_zero(&desc->bpf.refcnt)) {
> +			if (unlikely(desc->bpf.inactive)) {
> +				bpf_prog_array_free(desc->bpf.inactive);
> +				desc->bpf.inactive = NULL;
> +			}
> +			continue;
> +		}
> +
> 		activate_effective_progs(desc, type, desc->bpf.inactive);
> 		desc->bpf.inactive = NULL;
> 	}
> -- 
> 2.21.0
> 


Thread overview: 3+ messages
2019-06-25 21:38 [PATCH v2 bpf-next] bpf: fix cgroup bpf release synchronization Roman Gushchin
2019-06-26 20:04 ` Song Liu [this message]
2019-06-26 20:04 ` Alexei Starovoitov