bpf.vger.kernel.org archive mirror
* [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
@ 2019-05-08 17:18 Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 1/4] " Stanislav Fomichev
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 17:18 UTC (permalink / raw)
  To: netdev, bpf; +Cc: davem, ast, daniel, Stanislav Fomichev

Right now we are not using rcu api correctly: we pass __rcu pointers
to bpf_prog_array_xyz routines but don't use rcu_dereference on them
(see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
Instead of sprinkling rcu_dereferences, let's just get rid of those
__rcu annotations and move rcu handling to a higher level.

It looks like all those routines are called from the rcu update
side and we can use simple rcu_dereference_protected to get a
reference that is valid as long as we hold a mutex (i.e. no other
updater can change the pointer, no need for rcu read section and
there should not be a use-after-free problem).

To be fair, there is currently no issue with the existing approach
since the calls are mutex-protected, pointer values don't change,
__rcu annotations are ignored. But it's still nice to use proper api.

The series fixes the following sparse warnings:

kernel/bpf/core.c:1876:44: warning: incorrect type in initializer (different address spaces)
kernel/bpf/core.c:1876:44:    expected struct bpf_prog_array_item *item
kernel/bpf/core.c:1876:44:    got struct bpf_prog_array_item [noderef] <asn:4> *
kernel/bpf/core.c:1900:26: warning: incorrect type in assignment (different address spaces)
kernel/bpf/core.c:1900:26:    expected struct bpf_prog_array_item *existing
kernel/bpf/core.c:1900:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
kernel/bpf/core.c:1934:26: warning: incorrect type in assignment (different address spaces)
kernel/bpf/core.c:1934:26:    expected struct bpf_prog_array_item *[assigned] existing
kernel/bpf/core.c:1934:26:    got struct bpf_prog_array_item [noderef] <asn:4> *

Stanislav Fomichev (4):
  bpf: remove __rcu annotations from bpf_prog_array
  bpf: media: properly use bpf_prog_array api
  bpf: cgroup: properly use bpf_prog_array api
  bpf: tracing: properly use bpf_prog_array api

 drivers/media/rc/bpf-lirc.c | 27 +++++++++++++++++----------
 include/linux/bpf-cgroup.h  |  2 +-
 include/linux/bpf.h         | 12 ++++++------
 kernel/bpf/cgroup.c         | 27 +++++++++++++++++----------
 kernel/bpf/core.c           | 31 ++++++++++++-------------------
 kernel/trace/bpf_trace.c    | 18 ++++++++++--------
 6 files changed, 63 insertions(+), 54 deletions(-)

-- 
2.21.0.1020.gf2820cf01a-goog

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH bpf 1/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
@ 2019-05-08 17:18 ` Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 2/4] bpf: media: properly use bpf_prog_array api Stanislav Fomichev
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 17:18 UTC (permalink / raw)
  To: netdev, bpf; +Cc: davem, ast, daniel, Stanislav Fomichev

Drop __rcu annotations and rcu read sections. That's not needed since
all existing callers call those helpers from the rcu update side
and under a mutex. This guarantees that use-after-free could not
happen. In the next patches I'll fix the callers with missing
rcu_dereference_protected to make sparse/lockdep happy.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 include/linux/bpf.h | 12 ++++++------
 kernel/bpf/core.c   | 31 ++++++++++++-------------------
 2 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 944ccc310201..b90d2859bc60 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -476,17 +476,17 @@ struct bpf_prog_array {
 };
 
 struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags);
-void bpf_prog_array_free(struct bpf_prog_array __rcu *progs);
-int bpf_prog_array_length(struct bpf_prog_array __rcu *progs);
-int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs,
+void bpf_prog_array_free(struct bpf_prog_array *progs);
+int bpf_prog_array_length(struct bpf_prog_array *progs);
+int bpf_prog_array_copy_to_user(struct bpf_prog_array *progs,
 				__u32 __user *prog_ids, u32 cnt);
 
-void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *progs,
+void bpf_prog_array_delete_safe(struct bpf_prog_array *progs,
 				struct bpf_prog *old_prog);
-int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_info(struct bpf_prog_array *array,
 			     u32 *prog_ids, u32 request_cnt,
 			     u32 *prog_cnt);
-int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
+int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
 			struct bpf_prog_array **new_array);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ff09d32a8a1b..da03fbc811fd 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1794,38 +1794,33 @@ struct bpf_prog_array *bpf_prog_array_alloc(u32 prog_cnt, gfp_t flags)
 	return &empty_prog_array.hdr;
 }
 
-void bpf_prog_array_free(struct bpf_prog_array __rcu *progs)
+void bpf_prog_array_free(struct bpf_prog_array *progs)
 {
-	if (!progs ||
-	    progs == (struct bpf_prog_array __rcu *)&empty_prog_array.hdr)
+	if (!progs || progs == &empty_prog_array.hdr)
 		return;
 	kfree_rcu(progs, rcu);
 }
 
-int bpf_prog_array_length(struct bpf_prog_array __rcu *array)
+int bpf_prog_array_length(struct bpf_prog_array *array)
 {
 	struct bpf_prog_array_item *item;
 	u32 cnt = 0;
 
-	rcu_read_lock();
-	item = rcu_dereference(array)->items;
-	for (; item->prog; item++)
+	for (item = array->items; item->prog; item++)
 		if (item->prog != &dummy_bpf_prog.prog)
 			cnt++;
-	rcu_read_unlock();
 	return cnt;
 }
 
 
-static bool bpf_prog_array_copy_core(struct bpf_prog_array __rcu *array,
+static bool bpf_prog_array_copy_core(struct bpf_prog_array *array,
 				     u32 *prog_ids,
 				     u32 request_cnt)
 {
 	struct bpf_prog_array_item *item;
 	int i = 0;
 
-	item = rcu_dereference_check(array, 1)->items;
-	for (; item->prog; item++) {
+	for (item = array->items; item->prog; item++) {
 		if (item->prog == &dummy_bpf_prog.prog)
 			continue;
 		prog_ids[i] = item->prog->aux->id;
@@ -1838,7 +1833,7 @@ static bool bpf_prog_array_copy_core(struct bpf_prog_array __rcu *array,
 	return !!(item->prog);
 }
 
-int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_to_user(struct bpf_prog_array *array,
 				__u32 __user *prog_ids, u32 cnt)
 {
 	unsigned long err = 0;
@@ -1858,9 +1853,7 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
 	ids = kcalloc(cnt, sizeof(u32), GFP_USER | __GFP_NOWARN);
 	if (!ids)
 		return -ENOMEM;
-	rcu_read_lock();
 	nospc = bpf_prog_array_copy_core(array, ids, cnt);
-	rcu_read_unlock();
 	err = copy_to_user(prog_ids, ids, cnt * sizeof(u32));
 	kfree(ids);
 	if (err)
@@ -1870,19 +1863,19 @@ int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *array,
 	return 0;
 }
 
-void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *array,
+void bpf_prog_array_delete_safe(struct bpf_prog_array *array,
 				struct bpf_prog *old_prog)
 {
-	struct bpf_prog_array_item *item = array->items;
+	struct bpf_prog_array_item *item;
 
-	for (; item->prog; item++)
+	for (item = array->items; item->prog; item++)
 		if (item->prog == old_prog) {
 			WRITE_ONCE(item->prog, &dummy_bpf_prog.prog);
 			break;
 		}
 }
 
-int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
+int bpf_prog_array_copy(struct bpf_prog_array *old_array,
 			struct bpf_prog *exclude_prog,
 			struct bpf_prog *include_prog,
 			struct bpf_prog_array **new_array)
@@ -1946,7 +1939,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 	return 0;
 }
 
-int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array,
+int bpf_prog_array_copy_info(struct bpf_prog_array *array,
 			     u32 *prog_ids, u32 request_cnt,
 			     u32 *prog_cnt)
 {
-- 
2.21.0.1020.gf2820cf01a-goog



* [PATCH bpf 2/4] bpf: media: properly use bpf_prog_array api
  2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 1/4] " Stanislav Fomichev
@ 2019-05-08 17:18 ` Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 3/4] bpf: cgroup: " Stanislav Fomichev
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 17:18 UTC (permalink / raw)
  To: netdev, bpf
  Cc: davem, ast, daniel, Stanislav Fomichev, linux-media, Sean Young

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain array pointer
under mutex.

Cc: linux-media@vger.kernel.org
Cc: Sean Young <sean@mess.org>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 drivers/media/rc/bpf-lirc.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
index 390a722e6211..38d4c01174fc 100644
--- a/drivers/media/rc/bpf-lirc.c
+++ b/drivers/media/rc/bpf-lirc.c
@@ -8,6 +8,9 @@
 #include <linux/bpf_lirc.h>
 #include "rc-core-priv.h"
 
+#define lirc_dereference(p)						\
+	rcu_dereference_protected(p, lockdep_is_held(&ir_raw_handler_lock))
+
 /*
  * BPF interface for raw IR
  */
@@ -130,7 +133,7 @@ const struct bpf_verifier_ops lirc_mode2_verifier_ops = {
 
 static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	struct ir_raw_event_ctrl *raw;
 	int ret;
@@ -148,12 +151,12 @@ static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 		goto unlock;
 	}
 
-	if (raw->progs && bpf_prog_array_length(raw->progs) >= BPF_MAX_PROGS) {
+	old_array = lirc_dereference(raw->progs);
+	if (old_array && bpf_prog_array_length(old_array) >= BPF_MAX_PROGS) {
 		ret = -E2BIG;
 		goto unlock;
 	}
 
-	old_array = raw->progs;
 	ret = bpf_prog_array_copy(old_array, NULL, prog, &new_array);
 	if (ret < 0)
 		goto unlock;
@@ -168,7 +171,7 @@ static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
 
 static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	struct ir_raw_event_ctrl *raw;
 	int ret;
@@ -186,7 +189,7 @@ static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
 		goto unlock;
 	}
 
-	old_array = raw->progs;
+	old_array = lirc_dereference(raw->progs);
 	ret = bpf_prog_array_copy(old_array, prog, NULL, &new_array);
 	/*
 	 * Do not use bpf_prog_array_delete_safe() as we would end up
@@ -217,21 +220,25 @@ void lirc_bpf_run(struct rc_dev *rcdev, u32 sample)
 /*
  * This should be called once the rc thread has been stopped, so there can be
  * no concurrent bpf execution.
+ *
+ * Should be called with the ir_raw_handler_lock held.
  */
 void lirc_bpf_free(struct rc_dev *rcdev)
 {
 	struct bpf_prog_array_item *item;
+	struct bpf_prog_array *array;
 
-	if (!rcdev->raw->progs)
+	array = lirc_dereference(rcdev->raw->progs);
+	if (!array)
 		return;
 
-	item = rcu_dereference(rcdev->raw->progs)->items;
+	item = array->items;
 	while (item->prog) {
 		bpf_prog_put(item->prog);
 		item++;
 	}
 
-	bpf_prog_array_free(rcdev->raw->progs);
+	bpf_prog_array_free(array);
 }
 
 int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
@@ -284,7 +291,7 @@ int lirc_prog_detach(const union bpf_attr *attr)
 int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
 {
 	__u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
-	struct bpf_prog_array __rcu *progs;
+	struct bpf_prog_array *progs;
 	struct rc_dev *rcdev;
 	u32 cnt, flags = 0;
 	int ret;
@@ -305,7 +312,7 @@ int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
 	if (ret)
 		goto put;
 
-	progs = rcdev->raw->progs;
+	progs = lirc_dereference(rcdev->raw->progs);
 	cnt = progs ? bpf_prog_array_length(progs) : 0;
 
 	if (copy_to_user(&uattr->query.prog_cnt, &cnt, sizeof(cnt))) {
-- 
2.21.0.1020.gf2820cf01a-goog



* [PATCH bpf 3/4] bpf: cgroup: properly use bpf_prog_array api
  2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 1/4] " Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 2/4] bpf: media: properly use bpf_prog_array api Stanislav Fomichev
@ 2019-05-08 17:18 ` Stanislav Fomichev
  2019-05-08 17:18 ` [PATCH bpf 4/4] bpf: tracing: " Stanislav Fomichev
  2019-05-08 17:56 ` [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Alexei Starovoitov
  4 siblings, 0 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 17:18 UTC (permalink / raw)
  To: netdev, bpf; +Cc: davem, ast, daniel, Stanislav Fomichev

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain array pointer
under mutex.

We also don't need __rcu annotations on cgroup_bpf.inactive since
it's not read/updated concurrently.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 include/linux/bpf-cgroup.h |  2 +-
 kernel/bpf/cgroup.c        | 27 +++++++++++++++++----------
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index a4c644c1c091..5e515b72ff55 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -69,7 +69,7 @@ struct cgroup_bpf {
 	u32 flags[MAX_BPF_ATTACH_TYPE];
 
 	/* temp storage for effective prog array used by prog_attach/detach */
-	struct bpf_prog_array __rcu *inactive;
+	struct bpf_prog_array *inactive;
 };
 
 void cgroup_bpf_put(struct cgroup *cgrp);
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 4e807973aa80..d59826add5ef 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -19,6 +19,9 @@
 DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
 EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 
+#define cgroup_dereference(p)						\
+	rcu_dereference_protected(p, lockdep_is_held(&cgroup_mutex))
+
 /**
  * cgroup_bpf_put() - put references of all bpf programs
  * @cgrp: the cgroup to modify
@@ -26,6 +29,7 @@ EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 void cgroup_bpf_put(struct cgroup *cgrp)
 {
 	enum bpf_cgroup_storage_type stype;
+	struct bpf_prog_array *old_array;
 	unsigned int type;
 
 	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.progs); type++) {
@@ -42,7 +46,8 @@ void cgroup_bpf_put(struct cgroup *cgrp)
 			kfree(pl);
 			static_branch_dec(&cgroup_bpf_enabled_key);
 		}
-		bpf_prog_array_free(cgrp->bpf.effective[type]);
+		old_array = cgroup_dereference(cgrp->bpf.effective[type]);
+		bpf_prog_array_free(old_array);
 	}
 }
 
@@ -98,7 +103,7 @@ static bool hierarchy_allows_attach(struct cgroup *cgrp,
  */
 static int compute_effective_progs(struct cgroup *cgrp,
 				   enum bpf_attach_type type,
-				   struct bpf_prog_array __rcu **array)
+				   struct bpf_prog_array **array)
 {
 	enum bpf_cgroup_storage_type stype;
 	struct bpf_prog_array *progs;
@@ -136,17 +141,17 @@ static int compute_effective_progs(struct cgroup *cgrp,
 		}
 	} while ((p = cgroup_parent(p)));
 
-	rcu_assign_pointer(*array, progs);
+	*array = progs;
 	return 0;
 }
 
 static void activate_effective_progs(struct cgroup *cgrp,
 				     enum bpf_attach_type type,
-				     struct bpf_prog_array __rcu *array)
+				     struct bpf_prog_array *array)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 
-	old_array = xchg(&cgrp->bpf.effective[type], array);
+	old_array = xchg((__force struct bpf_prog_array **)&cgrp->bpf.effective[type], array);
 	/* free prog array after grace period, since __cgroup_bpf_run_*()
 	 * might be still walking the array
 	 */
@@ -163,7 +168,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
  * that array below is variable length
  */
 #define	NR ARRAY_SIZE(cgrp->bpf.effective)
-	struct bpf_prog_array __rcu *arrays[NR] = {};
+	struct bpf_prog_array *arrays[NR] = {};
 	int i;
 
 	for (i = 0; i < NR; i++)
@@ -441,10 +446,13 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 	enum bpf_attach_type type = attr->query.attach_type;
 	struct list_head *progs = &cgrp->bpf.progs[type];
 	u32 flags = cgrp->bpf.flags[type];
+	struct bpf_prog_array *effective;
 	int cnt, ret = 0, i;
 
+	effective = cgroup_dereference(cgrp->bpf.effective[type]);
+
 	if (attr->query.query_flags & BPF_F_QUERY_EFFECTIVE)
-		cnt = bpf_prog_array_length(cgrp->bpf.effective[type]);
+		cnt = bpf_prog_array_length(effective);
 	else
 		cnt = prog_list_length(progs);
 
@@ -461,8 +469,7 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
 	}
 
 	if (attr->query.query_flags & BPF_F_QUERY_EFFECTIVE) {
-		return bpf_prog_array_copy_to_user(cgrp->bpf.effective[type],
-						   prog_ids, cnt);
+		return bpf_prog_array_copy_to_user(effective, prog_ids, cnt);
 	} else {
 		struct bpf_prog_list *pl;
 		u32 id;
-- 
2.21.0.1020.gf2820cf01a-goog



* [PATCH bpf 4/4] bpf: tracing: properly use bpf_prog_array api
  2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
                   ` (2 preceding siblings ...)
  2019-05-08 17:18 ` [PATCH bpf 3/4] bpf: cgroup: " Stanislav Fomichev
@ 2019-05-08 17:18 ` Stanislav Fomichev
  2019-05-08 17:56 ` [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Alexei Starovoitov
  4 siblings, 0 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 17:18 UTC (permalink / raw)
  To: netdev, bpf; +Cc: davem, ast, daniel, Stanislav Fomichev

Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected to obtain array pointer
under mutex.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 kernel/trace/bpf_trace.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d64c00afceb5..5f8f7fdbe27c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -17,6 +17,9 @@
 #include "trace_probe.h"
 #include "trace.h"
 
+#define bpf_event_dereference(p)					\
+	rcu_dereference_protected(p, lockdep_is_held(&bpf_event_mutex))
+
 #ifdef CONFIG_MODULES
 struct bpf_trace_module {
 	struct module *module;
@@ -999,7 +1002,7 @@ static DEFINE_MUTEX(bpf_event_mutex);
 int perf_event_attach_bpf_prog(struct perf_event *event,
 			       struct bpf_prog *prog)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	int ret = -EEXIST;
 
@@ -1017,7 +1020,7 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
 	if (event->prog)
 		goto unlock;
 
-	old_array = event->tp_event->prog_array;
+	old_array = bpf_event_dereference(event->tp_event->prog_array);
 	if (old_array &&
 	    bpf_prog_array_length(old_array) >= BPF_TRACE_MAX_PROGS) {
 		ret = -E2BIG;
@@ -1040,7 +1043,7 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
 
 void perf_event_detach_bpf_prog(struct perf_event *event)
 {
-	struct bpf_prog_array __rcu *old_array;
+	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
 	int ret;
 
@@ -1049,7 +1052,7 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 	if (!event->prog)
 		goto unlock;
 
-	old_array = event->tp_event->prog_array;
+	old_array = bpf_event_dereference(event->tp_event->prog_array);
 	ret = bpf_prog_array_copy(old_array, event->prog, NULL, &new_array);
 	if (ret == -ENOENT)
 		goto unlock;
@@ -1071,6 +1074,7 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 {
 	struct perf_event_query_bpf __user *uquery = info;
 	struct perf_event_query_bpf query = {};
+	struct bpf_prog_array *progs;
 	u32 *ids, prog_cnt, ids_len;
 	int ret;
 
@@ -1095,10 +1099,8 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 	 */
 
 	mutex_lock(&bpf_event_mutex);
-	ret = bpf_prog_array_copy_info(event->tp_event->prog_array,
-				       ids,
-				       ids_len,
-				       &prog_cnt);
+	progs = bpf_event_dereference(event->tp_event->prog_array);
+	ret = bpf_prog_array_copy_info(progs, ids, ids_len, &prog_cnt);
 	mutex_unlock(&bpf_event_mutex);
 
 	if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) ||
-- 
2.21.0.1020.gf2820cf01a-goog



* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
                   ` (3 preceding siblings ...)
  2019-05-08 17:18 ` [PATCH bpf 4/4] bpf: tracing: " Stanislav Fomichev
@ 2019-05-08 17:56 ` Alexei Starovoitov
  2019-05-08 18:12   ` Stanislav Fomichev
  4 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-08 17:56 UTC (permalink / raw)
  To: Stanislav Fomichev; +Cc: netdev, bpf, davem, ast, daniel

On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> Right now we are not using rcu api correctly: we pass __rcu pointers
> to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> Instead of sprinkling rcu_dereferences, let's just get rid of those
> __rcu annotations and move rcu handling to a higher level.
> 
> It looks like all those routines are called from the rcu update
> side and we can use simple rcu_dereference_protected to get a
> reference that is valid as long as we hold a mutex (i.e. no other
> updater can change the pointer, no need for rcu read section and
> there should not be a use-after-free problem).
> 
> To be fair, there is currently no issue with the existing approach
> since the calls are mutex-protected, pointer values don't change,
> __rcu annotations are ignored. But it's still nice to use proper api.
> 
> The series fixes the following sparse warnings:

Absolutely not.
please fix it properly.
Removing annotations is not a fix.


* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-08 17:56 ` [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Alexei Starovoitov
@ 2019-05-08 18:12   ` Stanislav Fomichev
  2019-05-13 18:57     ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-08 18:12 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Stanislav Fomichev, netdev, bpf, davem, ast, daniel

On 05/08, Alexei Starovoitov wrote:
> On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > Right now we are not using rcu api correctly: we pass __rcu pointers
> > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > __rcu annotations and move rcu handling to a higher level.
> > 
> > It looks like all those routines are called from the rcu update
> > side and we can use simple rcu_dereference_protected to get a
> > reference that is valid as long as we hold a mutex (i.e. no other
> > updater can change the pointer, no need for rcu read section and
> > there should not be a use-after-free problem).
> > 
> > To be fair, there is currently no issue with the existing approach
> > since the calls are mutex-protected, pointer values don't change,
> > __rcu annotations are ignored. But it's still nice to use proper api.
> > 
> > The series fixes the following sparse warnings:
> 
> Absolutely not.
> please fix it properly.
> Removing annotations is not a fix.
I'm fixing it properly by removing the annotations and moving lifetime
management to the upper layer. See commits 2-4 where I fix the users, the
first patch is just the "preparation".

The users are supposed to do:

mutex_lock(&x);
p = rcu_dereference_protected(prog_array, lockdep_is_held(&x))
// ...
// call bpf_prog_array helpers while mutex guarantees that
// the object referenced by p is valid (i.e. no need for bpf_prog_array
// helpers to care about rcu lifetime)
// ...
mutex_unlock(&x);

What am I missing here?


* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-08 18:12   ` Stanislav Fomichev
@ 2019-05-13 18:57     ` Stanislav Fomichev
  2019-05-14 16:55       ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-13 18:57 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: Stanislav Fomichev, netdev, bpf, davem, ast, daniel

On 05/08, Stanislav Fomichev wrote:
> On 05/08, Alexei Starovoitov wrote:
> > On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > > Right now we are not using rcu api correctly: we pass __rcu pointers
> > > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > > __rcu annotations and move rcu handling to a higher level.
> > > 
> > > It looks like all those routines are called from the rcu update
> > > side and we can use simple rcu_dereference_protected to get a
> > > reference that is valid as long as we hold a mutex (i.e. no other
> > > updater can change the pointer, no need for rcu read section and
> > > there should not be a use-after-free problem).
> > > 
> > > To be fair, there is currently no issue with the existing approach
> > > since the calls are mutex-protected, pointer values don't change,
> > > __rcu annotations are ignored. But it's still nice to use proper api.
> > > 
> > > The series fixes the following sparse warnings:
> > 
> > Absolutely not.
> > please fix it properly.
> > Removing annotations is not a fix.
> I'm fixing it properly by removing the annotations and moving lifetime
> management to the upper layer. See commits 2-4 where I fix the users, the
> first patch is just the "preparation".
> 
> The users are supposed to do:
> 
> mutex_lock(&x);
> p = rcu_dereference_protected(prog_array, lockdep_is_held(&x))
> // ...
> // call bpf_prog_array helpers while mutex guarantees that
> // the object referenced by p is valid (i.e. no need for bpf_prog_array
> // helpers to care about rcu lifetime)
> // ...
> mutex_unlock(&x);
> 
> What am I missing here?

Just to give you my perspective on why the current api with __rcu annotations
works, but is not optimal (even if used from the rcu read section).

Sample code:

	struct bpf_prog_array __rcu *progs = <comes from somewhere>;
	int n;

	rcu_read_lock();
	n = bpf_prog_array_length(progs);
	if (n > 0) {
	  // do something with __rcu progs
	  do_something(progs);
	}
	rcu_read_unlock();

Since progs is __rcu annotated, do_something() would need to do
rcu_dereference again and it might get a different value compared to
whatever bpf_prog_array_free got while doing its dereference.

A better way is not to deal with rcu inside those helpers and let
higher layers do that:

	struct bpf_prog_array __rcu *progs = <comes from somewhere>;
	struct bpf_prog_array *p;
	int n;

	rcu_read_lock();
	p = rcu_dereference(progs);
	n = bpf_prog_array_length(p);
	if (n > 0) {
	  do_something(p); // do_something sees the same p as bpf_prog_array_length
	}
	rcu_read_unlock();

What do you think?

If it sounds reasonable, I can follow up with a v2 because I think I can
replace xchg with rcu_swap_protected as well. Or I can resend for bpf-next to
have another round of discussion. Thoughts?


* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-13 18:57     ` Stanislav Fomichev
@ 2019-05-14 16:55       ` Alexei Starovoitov
  2019-05-14 17:30         ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-14 16:55 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On Mon, May 13, 2019 at 11:57 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 05/08, Stanislav Fomichev wrote:
> > On 05/08, Alexei Starovoitov wrote:
> > > On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > > > Right now we are not using rcu api correctly: we pass __rcu pointers
> > > > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > > > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > > > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > > > __rcu annotations and move rcu handling to a higher level.
> > > >
> > > > It looks like all those routines are called from the rcu update
> > > > side and we can use simple rcu_dereference_protected to get a
> > > > reference that is valid as long as we hold a mutex (i.e. no other
> > > > updater can change the pointer, no need for rcu read section and
> > > > there should not be a use-after-free problem).
> > > >
> > > > To be fair, there is currently no issue with the existing approach
> > > > since the calls are mutex-protected, pointer values don't change,
> > > > __rcu annotations are ignored. But it's still nice to use proper api.
> > > >
> > > > The series fixes the following sparse warnings:
> > >
> > > Absolutely not.
> > > please fix it properly.
> > > Removing annotations is not a fix.
> > I'm fixing it properly by removing the annotations and moving lifetime
> > management to the upper layer. See commits 2-4 where I fix the users, the
> > first patch is just the "preparation".
> >
> > The users are supposed to do:
> >
> > mutex_lock(&x);
> > p = rcu_dereference_protected(prog_array, lockdep_is_held(&x))
> > // ...
> > // call bpf_prog_array helpers while mutex guarantees that
> > // the object referenced by p is valid (i.e. no need for bpf_prog_array
> > // helpers to care about rcu lifetime)
> > // ...
> > mutex_unlock(&x);
> >
> > What am I missing here?
>
> Just to give you my perspective on why the current api with __rcu annotations
> works, but is not optimal (even if used from the rcu read section).
>
> Sample code:
>
>         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
>         int n;
>
>         rcu_read_lock();
>         n = bpf_prog_array_length(progs);
>         if (n > 0) {
>           // do something with __rcu progs
>           do_something(progs);
>         }
>         rcu_read_unlock();
>
> Since progs is __rcu annotated, do_something() would need to do
> rcu_dereference again and it might get a different value compared to
> whatever bpf_prog_array_free got while doing its dereference.

correct and I believe the code deals with it fine.
cnt could be different between two calls.

> A better way is not to deal with rcu inside those helpers and let
> higher layers do that:
>
>         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
>         struct bpf_prog_array *p;
>         int n;
>
>         rcu_read_lock();
>         p = rcu_dereference(progs);
>         n = bpf_prog_array_length(p);
>         if (n > 0) {
>           do_something(p); // do_something sees the same p as bpf_prog_array_length
>         }
>         rcu_read_unlock();
>
> What do you think?

I'm not sure that's generically applicable.
Which piece of code do you have in mind for such refactoring?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-14 16:55       ` Alexei Starovoitov
@ 2019-05-14 17:30         ` Stanislav Fomichev
  2019-05-14 17:45           ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-14 17:30 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On 05/14, Alexei Starovoitov wrote:
> On Mon, May 13, 2019 at 11:57 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
> >
> > On 05/08, Stanislav Fomichev wrote:
> > > On 05/08, Alexei Starovoitov wrote:
> > > > On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > > > > Right now we are not using rcu api correctly: we pass __rcu pointers
> > > > > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > > > > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > > > > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > > > > __rcu annotations and move rcu handling to a higher level.
> > > > >
> > > > > It looks like all those routines are called from the rcu update
> > > > > side and we can use simple rcu_dereference_protected to get a
> > > > > reference that is valid as long as we hold a mutex (i.e. no other
> > > > > updater can change the pointer, no need for rcu read section and
> > > > > there should not be a use-after-free problem).
> > > > >
> > > > > To be fair, there is currently no issue with the existing approach
> > > > > since the calls are mutex-protected, pointer values don't change,
> > > > > __rcu annotations are ignored. But it's still nice to use proper api.
> > > > >
> > > > > The series fixes the following sparse warnings:
> > > >
> > > > Absolutely not.
> > > > please fix it properly.
> > > > Removing annotations is not a fix.
> > > I'm fixing it properly by removing the annotations and moving lifetime
> > > management to the upper layer. See commits 2-4 where I fix the users, the
> > > first patch is just the "preparation".
> > >
> > > The users are supposed to do:
> > >
> > > mutex_lock(&x);
> > > p = rcu_dereference_protected(prog_array, lockdep_is_held(&x));
> > > // ...
> > > // call bpf_prog_array helpers while mutex guarantees that
> > > // the object referenced by p is valid (i.e. no need for bpf_prog_array
> > > // helpers to care about rcu lifetime)
> > > // ...
> > > mutex_unlock(&x);
> > >
> > > What am I missing here?
> >
> > Just to give you my perspective on why current api with __rcu annotations
> > is working, but not optimal (even if used from the rcu read section).
> >
> > Sample code:
> >
> >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> >         int n;
> >
> >         rcu_read_lock();
> >         n = bpf_prog_array_length(progs);
> >         if (n > 0) {
> >           // do something with __rcu progs
> >           do_something(progs);
> >         }
> >         rcu_read_unlock();
> >
> > Since progs is __rcu annotated, do_something() would need to do
> > rcu_dereference again and it might get a different value compared to
> > whatever bpf_prog_array_free got while doing its dereference.
> 
> correct and I believe the code deals with it fine.
> cnt could be different between two calls.
Yes, currently there is no problem, all users of these apis
are fine because they are holding a mutex (and are hence in the rcu update
path, i.e. the pointer can't change and they have a consistent view
between the calls).

For example, we currently have:

	int n1, n2;
	mutex_lock(&x);
	n1 = bpf_prog_array_length(progs);
	n2 = bpf_prog_array_length(progs);
	// n1 is guaranteed to be the same as n2 as long as we
	// hold a mutex; single updater only
	...
	mutex_unlock(&x);

Versus:

	rcu_read_lock();
	n1 = bpf_prog_array_length(progs);
	n2 = bpf_prog_array_length(progs);
	// n1 and n2 can be different; rcu_read_lock is all about
	// lifetime
	...
	rcu_read_unlock();

But, as I said, we don't use those __rcu annotated bpf_prog_array
routines in the rcu read section, so we are fine.

(I'm just showing that potentially there might be a problem if we don't move
rcu management away from bpf_prog_array routines and if someone
decides to call them under rcu_read_lock).

> > A better way is not to deal with rcu inside those helpers and let
> > higher layers do that:
> >
> >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> >         struct bpf_prog_array *p;
> >         int n;
> >
> >         rcu_read_lock();
> >         p = rcu_dereference(progs);
> >         n = bpf_prog_array_length(p);
> >         if (n > 0) {
> >           do_something(p); // do_something sees the same p as bpf_prog_array_length
> >         }
> >         rcu_read_unlock();
> >
> > What do you think?
> 
> I'm not sure that's generically applicable.
> Which piece of code do you have in mind for such refactoring?
All existing callers of (take a look at patches 2-4):

* bpf_prog_array_free
* bpf_prog_array_length
* bpf_prog_array_copy_to_user
* bpf_prog_array_delete_safe
* bpf_prog_array_copy_info
* bpf_prog_array_copy

They are:

* perf event bpf attach/detach/query (under bpf_event_mutex)
* cgroup attach/detach/query (management of cgroup_bpf->effective, under
  cgroup_mutex)
* lirc bpf attach/detach/query (under ir_raw_handler_lock)

Nobody uses these apis in the rcu read section, so we can remove the
annotations and use rcu_dereference_protected on the higher layer.

Side bonus is that we can also remove __rcu from cgroup_bpf.inactive
(which is just a temp storage and not updated in the rcu fashion) and
from old_array in activate_effective_progs (which is an on-stack
array and, I guess, only has __rcu annotation to satisfy sparse).

So nothing is changing functionality-wise, but it becomes a bit easier
to reason about by being more explicit with rcu api.
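For reference, a per-subsystem dereference wrapper like the ones proposed
could look roughly as follows (a kernel-context sketch modeled on the
existing rtnl_dereference(); the wrapper name is made up here and the
actual series may spell things differently):

```c
/* Hypothetical wrapper; hardcodes cgroup_mutex the way
 * rtnl_dereference() hardcodes the rtnl lock. */
#define cgroup_rcu_dereference(p) \
	rcu_dereference_protected(p, lockdep_is_held(&cgroup_mutex))

	/* updater side, e.g. in cgroup attach/detach/query: */
	mutex_lock(&cgroup_mutex);
	old_array = cgroup_rcu_dereference(cgrp->bpf.effective[type]);
	cnt = bpf_prog_array_length(old_array);
	/* ... */
	mutex_unlock(&cgroup_mutex);
```

The helpers then take a plain struct bpf_prog_array * and never see an
__rcu-annotated pointer, while lockdep still checks that the right mutex
is held at the dereference point.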

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-14 17:30         ` Stanislav Fomichev
@ 2019-05-14 17:45           ` Alexei Starovoitov
  2019-05-14 17:53             ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-14 17:45 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 10:30:02AM -0700, Stanislav Fomichev wrote:
> On 05/14, Alexei Starovoitov wrote:
> > On Mon, May 13, 2019 at 11:57 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
> > >
> > > On 05/08, Stanislav Fomichev wrote:
> > > > On 05/08, Alexei Starovoitov wrote:
> > > > > On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > > > > > Right now we are not using rcu api correctly: we pass __rcu pointers
> > > > > > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > > > > > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > > > > > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > > > > > __rcu annotations and move rcu handling to a higher level.
> > > > > >
> > > > > > It looks like all those routines are called from the rcu update
> > > > > > side and we can use simple rcu_dereference_protected to get a
> > > > > > reference that is valid as long as we hold a mutex (i.e. no other
> > > > > > updater can change the pointer, no need for rcu read section and
> > > > > > there should not be a use-after-free problem).
> > > > > >
> > > > > > To be fair, there is currently no issue with the existing approach
> > > > > > since the calls are mutex-protected, pointer values don't change,
> > > > > > __rcu annotations are ignored. But it's still nice to use proper api.
> > > > > >
> > > > > > The series fixes the following sparse warnings:
> > > > >
> > > > > Absolutely not.
> > > > > please fix it properly.
> > > > > Removing annotations is not a fix.
> > > > I'm fixing it properly by removing the annotations and moving lifetime
> > > > management to the upper layer. See commits 2-4 where I fix the users, the
> > > > first patch is just the "preparation".
> > > >
> > > > The users are supposed to do:
> > > >
> > > > mutex_lock(&x);
> > > > p = rcu_dereference_protected(prog_array, lockdep_is_held(&x));
> > > > // ...
> > > > // call bpf_prog_array helpers while mutex guarantees that
> > > > // the object referenced by p is valid (i.e. no need for bpf_prog_array
> > > > // helpers to care about rcu lifetime)
> > > > // ...
> > > > mutex_unlock(&x);
> > > >
> > > > What am I missing here?
> > >
> > > Just to give you my perspective on why current api with __rcu annotations
> > > is working, but not optimal (even if used from the rcu read section).
> > >
> > > Sample code:
> > >
> > >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> > >         int n;
> > >
> > >         rcu_read_lock();
> > >         n = bpf_prog_array_length(progs);
> > >         if (n > 0) {
> > >           // do something with __rcu progs
> > >           do_something(progs);
> > >         }
> > >         rcu_read_unlock();
> > >
> > > Since progs is __rcu annotated, do_something() would need to do
> > > rcu_dereference again and it might get a different value compared to
> > > whatever bpf_prog_array_free got while doing its dereference.
> > 
> > correct and I believe the code deals with it fine.
> > cnt could be different between two calls.
> Yes, currently there is no problem, all users of these apis
> are fine because they are holding a mutex (and are hence in the rcu update
> path, i.e. the pointer can't change and they have a consistent view
> between the calls).
> 
> For example, we currently have:
> 
> 	int n1, n2;
> 	mutex_lock(&x);
> 	n1 = bpf_prog_array_length(progs);
> 	n2 = bpf_prog_array_length(progs);
> 	// n1 is guaranteed to be the same as n2 as long as we
> 	// hold a mutex; single updater only
> 	...
> 	mutex_unlock(&x);
> 
> Versus:
> 
> 	rcu_read_lock();
> 	n1 = bpf_prog_array_length(progs);
> 	n2 = bpf_prog_array_length(progs);
> 	// n1 and n2 can be different; rcu_read_lock is all about
> 	// lifetime
> 	...
> 	rcu_read_unlock();
> 
> But, as I said, we don't use those __rcu annotated bpf_prog_array
> routines in the rcu read section, so we are fine.
> 
> (I'm just showing that potentially there might be a problem if we don't move
> rcu management away from bpf_prog_array routines and if someone
> decides to call them under rcu_read_lock).
> 
> > > A better way is not to deal with rcu inside those helpers and let
> > > higher layers do that:
> > >
> > >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> > >         struct bpf_prog_array *p;
> > >         int n;
> > >
> > >         rcu_read_lock();
> > >         p = rcu_dereference(progs);
> > >         n = bpf_prog_array_length(p);
> > >         if (n > 0) {
> > >           do_something(p); // do_something sees the same p as bpf_prog_array_length
> > >         }
> > >         rcu_read_unlock();
> > >
> > > What do you think?
> > 
> > I'm not sure that's generically applicable.
> > Which piece of code do you have in mind for such refactoring?
> All existing callers of (take a look at patches 2-4):
> 
> * bpf_prog_array_free
> * bpf_prog_array_length
> * bpf_prog_array_copy_to_user
> * bpf_prog_array_delete_safe
> * bpf_prog_array_copy_info
> * bpf_prog_array_copy
> 
> They are:
> 
> * perf event bpf attach/detach/query (under bpf_event_mutex)
> * cgroup attach/detach/query (management of cgroup_bpf->effective, under
>   cgroup_mutex)
> * lirc bpf attach/detach/query (under ir_raw_handler_lock)
> 
> Nobody uses these apis in the rcu read section, so we can remove the
> annotations and use rcu_dereference_protected on the higher layer.
> 
> Side bonus is that we can also remove __rcu from cgroup_bpf.inactive
> (which is just a temp storage and not updated in the rcu fashion) and
> from old_array in activate_effective_progs (which is an on-stack
> array and, I guess, only has __rcu annotation to satisfy sparse).
> 
> So nothing is changing functionality-wise, but it becomes a bit easier
> to reason about by being more explicit with rcu api.

disagree. mutex is necessary.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-14 17:45           ` Alexei Starovoitov
@ 2019-05-14 17:53             ` Stanislav Fomichev
  2019-05-15  1:25               ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-14 17:53 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On 05/14, Alexei Starovoitov wrote:
> On Tue, May 14, 2019 at 10:30:02AM -0700, Stanislav Fomichev wrote:
> > On 05/14, Alexei Starovoitov wrote:
> > > On Mon, May 13, 2019 at 11:57 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
> > > >
> > > > On 05/08, Stanislav Fomichev wrote:
> > > > > On 05/08, Alexei Starovoitov wrote:
> > > > > > On Wed, May 08, 2019 at 10:18:41AM -0700, Stanislav Fomichev wrote:
> > > > > > > Right now we are not using rcu api correctly: we pass __rcu pointers
> > > > > > > to bpf_prog_array_xyz routines but don't use rcu_dereference on them
> > > > > > > (see bpf_prog_array_delete_safe and bpf_prog_array_copy in particular).
> > > > > > > Instead of sprinkling rcu_dereferences, let's just get rid of those
> > > > > > > __rcu annotations and move rcu handling to a higher level.
> > > > > > >
> > > > > > > It looks like all those routines are called from the rcu update
> > > > > > > side and we can use simple rcu_dereference_protected to get a
> > > > > > > reference that is valid as long as we hold a mutex (i.e. no other
> > > > > > > updater can change the pointer, no need for rcu read section and
> > > > > > > there should not be a use-after-free problem).
> > > > > > >
> > > > > > > To be fair, there is currently no issue with the existing approach
> > > > > > > since the calls are mutex-protected, pointer values don't change,
> > > > > > > __rcu annotations are ignored. But it's still nice to use proper api.
> > > > > > >
> > > > > > > The series fixes the following sparse warnings:
> > > > > >
> > > > > > Absolutely not.
> > > > > > please fix it properly.
> > > > > > Removing annotations is not a fix.
> > > > > I'm fixing it properly by removing the annotations and moving lifetime
> > > > > management to the upper layer. See commits 2-4 where I fix the users, the
> > > > > first patch is just the "preparation".
> > > > >
> > > > > The users are supposed to do:
> > > > >
> > > > > mutex_lock(&x);
> > > > > p = rcu_dereference_protected(prog_array, lockdep_is_held(&x));
> > > > > // ...
> > > > > // call bpf_prog_array helpers while mutex guarantees that
> > > > > // the object referenced by p is valid (i.e. no need for bpf_prog_array
> > > > > // helpers to care about rcu lifetime)
> > > > > // ...
> > > > > mutex_unlock(&x);
> > > > >
> > > > > What am I missing here?
> > > >
> > > > Just to give you my perspective on why current api with __rcu annotations
> > > > is working, but not optimal (even if used from the rcu read section).
> > > >
> > > > Sample code:
> > > >
> > > >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> > > >         int n;
> > > >
> > > >         rcu_read_lock();
> > > >         n = bpf_prog_array_length(progs);
> > > >         if (n > 0) {
> > > >           // do something with __rcu progs
> > > >           do_something(progs);
> > > >         }
> > > >         rcu_read_unlock();
> > > >
> > > > Since progs is __rcu annotated, do_something() would need to do
> > > > rcu_dereference again and it might get a different value compared to
> > > > whatever bpf_prog_array_free got while doing its dereference.
> > > 
> > > correct and I believe the code deals with it fine.
> > > cnt could be different between two calls.
> > Yes, currently there is no problem, all users of these apis
> > are fine because they are holding a mutex (and are hence in the rcu update
> > path, i.e. the pointer can't change and they have a consistent view
> > between the calls).
> > 
> > For example, we currently have:
> > 
> > 	int n1, n2;
> > 	mutex_lock(&x);
> > 	n1 = bpf_prog_array_length(progs);
> > 	n2 = bpf_prog_array_length(progs);
> > 	// n1 is guaranteed to be the same as n2 as long as we
> > 	// hold a mutex; single updater only
> > 	...
> > 	mutex_unlock(&x);
> > 
> > Versus:
> > 
> > 	rcu_read_lock();
> > 	n1 = bpf_prog_array_length(progs);
> > 	n2 = bpf_prog_array_length(progs);
> > 	// n1 and n2 can be different; rcu_read_lock is all about
> > 	// lifetime
> > 	...
> > 	rcu_read_unlock();
> > 
> > But, as I said, we don't use those __rcu annotated bpf_prog_array
> > routines in the rcu read section, so we are fine.
> > 
> > (I'm just showing that potentially there might be a problem if we don't move
> > rcu management away from bpf_prog_array routines and if someone
> > decides to call them under rcu_read_lock).
> > 
> > > > A better way is not to deal with rcu inside those helpers and let
> > > > higher layers do that:
> > > >
> > > >         struct bpf_prog_array __rcu *progs = <comes from somewhere>;
> > > >         struct bpf_prog_array *p;
> > > >         int n;
> > > >
> > > >         rcu_read_lock();
> > > >         p = rcu_dereference(progs);
> > > >         n = bpf_prog_array_length(p);
> > > >         if (n > 0) {
> > > >           do_something(p); // do_something sees the same p as bpf_prog_array_length
> > > >         }
> > > >         rcu_read_unlock();
> > > >
> > > > What do you think?
> > > 
> > > I'm not sure that's generically applicable.
> > > Which piece of code do you have in mind for such refactoring?
> > All existing callers of (take a look at patches 2-4):
> > 
> > * bpf_prog_array_free
> > * bpf_prog_array_length
> > * bpf_prog_array_copy_to_user
> > * bpf_prog_array_delete_safe
> > * bpf_prog_array_copy_info
> > * bpf_prog_array_copy
> > 
> > They are:
> > 
> > * perf event bpf attach/detach/query (under bpf_event_mutex)
> > * cgroup attach/detach/query (management of cgroup_bpf->effective, under
> >   cgroup_mutex)
> > * lirc bpf attach/detach/query (under ir_raw_handler_lock)
> > 
> > Nobody uses these apis in the rcu read section, so we can remove the
> > annotations and use rcu_dereference_protected on the higher layer.
> > 
> > Side bonus is that we can also remove __rcu from cgroup_bpf.inactive
> > (which is just a temp storage and not updated in the rcu fashion) and
> > from old_array in activate_effective_progs (which is an on-stack
> > array and, I guess, only has __rcu annotation to satisfy sparse).
> > 
> > So nothing is changing functionality-wise, but it becomes a bit easier
> > to reason about by being more explicit with rcu api.
> 
> disagree. mutex is necessary.
Oh, for sure, mutex is necessary, I'm not taking away the mutex.

All I'm doing is asking the users of those apis to be more
explicit by using rcu_dereference_protected(progs, lockdep_is_held(mtx)),
which can be read as "I know I'm holding a mutex and I'm in rcu
update section, progs pointer is not going to change. So as long
as I'm holding a mutex, I can call bpf_prog_array helpers and
get consistent result between the calls".

Existing __rcu annotations don't add anything to the safety.
See, for example, bpf_prog_array_copy_to_user and bpf_prog_array_length
which currently do rcu_read_lock/rcu_read_unlock inside. If we expect
the callers to hold a mutex, why do we start an rcu read section
inside what is already an rcu update section?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-14 17:53             ` Stanislav Fomichev
@ 2019-05-15  1:25               ` Alexei Starovoitov
  2019-05-15  2:11                 ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-15  1:25 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 10:53 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> Existing __rcu annotations don't add anything to the safety.

what do you mean?
BPF_PROG_RUN_ARRAY derefs these pointers under rcu.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  1:25               ` Alexei Starovoitov
@ 2019-05-15  2:11                 ` Stanislav Fomichev
  2019-05-15  2:27                   ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-15  2:11 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On 05/14, Alexei Starovoitov wrote:
> On Tue, May 14, 2019 at 10:53 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
> >
> > Existing __rcu annotations don't add anything to the safety.
> 
> what do you mean?
> BPF_PROG_RUN_ARRAY derefs these pointers under rcu.
And I'm not removing them from the struct definitions, I'm removing __rcu
from the helpers' arguments only. Because those helpers are always called
with the mutex held and don't need it. To reiterate: rcu_dereference_protected
is enough to get a plain pointer (from an __rcu-annotated one) for the
duration of the mutex; the helpers can operate on the non-annotated
(dereferenced) prog array.

Read section still does the following (BPF_PROG_RUN_ARRAY):

	rcu_read_lock();
	p = rcu_dereference(__rcu'd progs);
	while (p) {}
	rcu_read_unlock();

And write sections do:

	mutex_lock(&mtx);
	p = rcu_dereference_protected(__rcu'd progs, lockdep_is_held(&mtx));
	// ^^^ does rcu_dereference in the mutex protected section
	bpf_prog_array_length(p);
	bpf_prog_array_copy_to_user(p, ...);
	bpf_prog_array_delete_safe(p);
	bpf_prog_array_copy_info(p);
	bpf_prog_array_copy(p, ...);
	bpf_prog_array_free(p);
	// ^^^ all these helpers are consistent already with or
	// without __rcu annotation because we hold a mutex and
	// guarantee no concurrent updates, so __rcu annotations
	// for their input arguments are not needed.
	mutex_unlock(&mtx);

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  2:11                 ` Stanislav Fomichev
@ 2019-05-15  2:27                   ` Alexei Starovoitov
  2019-05-15  2:44                     ` Eric Dumazet
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-15  2:27 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 7:11 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 05/14, Alexei Starovoitov wrote:
> > On Tue, May 14, 2019 at 10:53 AM Stanislav Fomichev <sdf@fomichev.me> wrote:
> > >
> > > Existing __rcu annotations don't add anything to the safety.
> >
> > what do you mean?
> > BPF_PROG_RUN_ARRAY derefs these pointers under rcu.
> And I'm not removing them from the struct definitions, I'm removing __rcu
> from the helpers' arguments only. Because those helpers are always called
> with the mutex held and don't need it. To reiterate: rcu_dereference_protected
> is enough to get a plain pointer (from an __rcu-annotated one) for the
> duration of the mutex; the helpers can operate on the non-annotated
> (dereferenced) prog array.
>
> Read section still does the following (BPF_PROG_RUN_ARRAY):
>
>         rcu_read_lock();
>         p = rcu_dereference(__rcu'd progs);
>         while (p) {}
>         rcu_read_unlock();
>
> And write sections do:
>
>         mutex_lock(&mtx);
>         p = rcu_dereference_protected(__rcu'd progs, lockdep_is_held(&mtx));
>         // ^^^ does rcu_dereference in the mutex protected section
>         bpf_prog_array_length(p);
>         bpf_prog_array_copy_to_user(p, ...);
>         bpf_prog_array_delete_safe(p);
>         bpf_prog_array_copy_info(p);
>         bpf_prog_array_copy(p, ...);
>         bpf_prog_array_free(p);

what about activate_effective_progs() ?
I wouldn't want to lose the annotation there.
but then array_free will lose it?
in some cases it's called without mutex in a destruction path.
also how do you propose to solve different 'mtx' in
lockdep_is_held(&mtx)); ?
passing it through the call chain is imo not clean.

I wonder what others think about this whole discussion.

>         // ^^^ all these helpers are consistent already with or
>         // without __rcu annotation because we hold a mutex and
>         // guarantee no concurrent updates, so __rcu annotations
>         // for their input arguments are not needed.
>         mutex_unlock(&mtx);

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  2:27                   ` Alexei Starovoitov
@ 2019-05-15  2:44                     ` Eric Dumazet
  2019-05-15  2:56                       ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Eric Dumazet @ 2019-05-15  2:44 UTC (permalink / raw)
  To: Alexei Starovoitov, Stanislav Fomichev
  Cc: Stanislav Fomichev, Network Development, bpf, David S. Miller,
	Alexei Starovoitov, Daniel Borkmann



On 5/14/19 7:27 PM, Alexei Starovoitov wrote:

> what about activate_effective_progs() ?
> I wouldn't want to lose the annotation there.
> but then array_free will lose it?
> in some cases it's called without mutex in a destruction path.
> also how do you propose to solve different 'mtx' in
> lockdep_is_held(&mtx)); ?
> passing it through the call chain is imo not clean.
> 

Usage of RCU api in BPF is indeed a bit strange and lacks lockdep support.

Looking at bpf_prog_array_copy_core() for example, it looks like the __rcu
in the first argument is not needed, since the caller must have done the proper dereference already,
and the caller knows which mutex is protecting its rcu_dereference_protected() for the writer side.

bpf_prog_array_copy_core() should manipulate standard pointers, with no __rcu stuff.

The analogy in net/ are probably the rtnl_dereference() users.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  2:44                     ` Eric Dumazet
@ 2019-05-15  2:56                       ` Stanislav Fomichev
  2019-05-15  3:16                         ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-15  2:56 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Alexei Starovoitov, Stanislav Fomichev, Network Development, bpf,
	David S. Miller, Alexei Starovoitov, Daniel Borkmann

On 05/14, Eric Dumazet wrote:
> 
> 
> On 5/14/19 7:27 PM, Alexei Starovoitov wrote:
> 
> > what about activate_effective_progs() ?
> > I wouldn't want to lose the annotation there.
> > but then array_free will lose it?
It would not have it because the input is the result of
bpf_prog_array_alloc(), which returns a kmalloc'd pointer (and
is not bound to an rcu section).

> > in some cases it's called without mutex in a destruction path.
Hm, can you point me to this place? I think I checked every path,
maybe I missed something subtle. I'll double check.

> > also how do you propose to solve different 'mtx' in
> > lockdep_is_held(&mtx)); ?
> > passing it through the call chain is imo not clean.
Every caller would know which mutex protects it. As Eric said below,
I'm adding a bunch of xxx_dereference macros that hardcode mutex, like
the existing rtnl_dereference.

> Usage of RCU api in BPF is indeed a bit strange and lacks lockdep support.
> 
> Looking at bpf_prog_array_copy_core() for example, it looks like the __rcu
> in the first argument is not needed, since the caller must have done the proper dereference already,
> and the caller knows which mutex is protecting its rcu_dereference_protected() for the writer sides.
> 
> bpf_prog_array_copy_core() should manipulate standard pointers, with no __rcu stuff.
> 
> The analogy in net/ are probably the rtnl_dereference() users.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  2:56                       ` Stanislav Fomichev
@ 2019-05-15  3:16                         ` Alexei Starovoitov
  2019-05-15  3:38                           ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-15  3:16 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Eric Dumazet, Stanislav Fomichev, Network Development, bpf,
	David S. Miller, Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 07:56:36PM -0700, Stanislav Fomichev wrote:
> On 05/14, Eric Dumazet wrote:
> > 
> > 
> > On 5/14/19 7:27 PM, Alexei Starovoitov wrote:
> > 
> > > what about activate_effective_progs() ?
> > > I wouldn't want to lose the annotation there.
> > > but then array_free will lose it?
> It would not have it because the input is the result of
> bpf_prog_array_alloc(), which returns a kmalloc'd pointer (and
> is not bound to an rcu section).
> 
> > > in some cases it's called without mutex in a destruction path.
> Hm, can you point me to this place? I think I checked every path,
> maybe I missed something subtle. I'll double check.

I thought the cgroup dying path was not doing it, but it looks like it is.

> > > also how do you propose to solve different 'mtx' in
> > > lockdep_is_held(&mtx)); ?
> > > passing it through the call chain is imo not clean.
> Every caller would know which mutex protects it. As Eric said below,
> I'm adding a bunch of xxx_dereference macros that hardcode mutex, like
> the existing rtnl_dereference.

I have a hard time imagining how it will look without being a mess.
There are three mutexes to pass down instead of a single rtnl_dereference:
cgroup_mutex, ir_raw_handler_lock, bpf_event_mutex.

Anyway, let's see how the patches look and discuss further.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  3:16                         ` Alexei Starovoitov
@ 2019-05-15  3:38                           ` Stanislav Fomichev
  2019-05-15  3:42                             ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-15  3:38 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Stanislav Fomichev, Eric Dumazet, Network Development, bpf,
	David S. Miller, Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 8:16 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, May 14, 2019 at 07:56:36PM -0700, Stanislav Fomichev wrote:
> > On 05/14, Eric Dumazet wrote:
> > >
> > >
> > > On 5/14/19 7:27 PM, Alexei Starovoitov wrote:
> > >
> > > > what about activate_effective_progs() ?
> > > > I wouldn't want to lose the annotation there.
> > > > but then array_free will lose it?
> > It would not have have it because the input is the result of
> > bpf_prog_array_alloc() which returns kmalloc'd pointer (and
> > is not bound to an rcu section).
> >
> > > > in some cases it's called without mutex in a destruction path.
> > Hm, can you point me to this place? I think I checked every path,
> > maybe I missed something subtle. I'll double check.
>
> I thought cgroup dying thingy is not doing it, but looks like it is.
I was looking at the following chain:
css_release_work_fn
  mutex_lock(&cgroup_mutex);
    cgroup_bpf_put
      bpf_prog_array_free
  mutex_unlock(&cgroup_mutex);

I'll take another look tomorrow with a fresh mind :-)

> > > > also how do you propose to solve different 'mtx' in
> > > > lockdep_is_held(&mtx)); ?
> > > > passing it through the call chain is imo not clean.
> > Every caller would know which mutex protects it. As Eric said below,
> > I'm adding a bunch of xxx_dereference macros that hardcode mutex, like
> > the existing rtnl_dereference.
>
> I have a hard time imagining how it will look without being a mess.
> There are three mutexes to pass down instead of a single rtnl_dereference:
> cgroup_mutex, ir_raw_handler_lock, bpf_event_mutex.
We don't need to pass them down, we need those xxx_dereference
wrappers only in the callers of those apis. They are private
to cgroup.c/lirc.c/bpf_trace.c.

Take a look at the patches 2-4 in the current series where I convert
the callers.

(Though, I'd rename xxx_dereference to xxx_rcu_dereference for clarity
when we get to a v2.)
>
> Anyway, let's see how the patches look and discuss further.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  3:38                           ` Stanislav Fomichev
@ 2019-05-15  3:42                             ` Alexei Starovoitov
  2019-05-15  3:49                               ` Stanislav Fomichev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2019-05-15  3:42 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Stanislav Fomichev, Eric Dumazet, Network Development, bpf,
	David S. Miller, Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 8:38 PM Stanislav Fomichev <sdf@google.com> wrote:
>
> Take a look at the patches 2-4 in the current series where I convert
> the callers.
>
> (Though, I'd rename xxx_dereference to xxx_rcu_dereference for clarity
> when we get to a v2.)

please make a fresh repost _after_ bpf-next opens.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array
  2019-05-15  3:42                             ` Alexei Starovoitov
@ 2019-05-15  3:49                               ` Stanislav Fomichev
  0 siblings, 0 replies; 21+ messages in thread
From: Stanislav Fomichev @ 2019-05-15  3:49 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Stanislav Fomichev, Eric Dumazet, Network Development, bpf,
	David S. Miller, Alexei Starovoitov, Daniel Borkmann

On Tue, May 14, 2019 at 8:42 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, May 14, 2019 at 8:38 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > Take a look at the patches 2-4 in the current series where I convert
> > the callers.
> >
> > (Though, I'd rename xxx_dereference to xxx_rcu_dereference for clarity
> > when we get to a v2.)
>
> please make a fresh repost _after_ bpf-next opens.
Sure, sgtm, thank you for the preliminary review and input!

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2019-05-15  3:49 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-08 17:18 [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Stanislav Fomichev
2019-05-08 17:18 ` [PATCH bpf 1/4] " Stanislav Fomichev
2019-05-08 17:18 ` [PATCH bpf 2/4] bpf: media: properly use bpf_prog_array api Stanislav Fomichev
2019-05-08 17:18 ` [PATCH bpf 3/4] bpf: cgroup: " Stanislav Fomichev
2019-05-08 17:18 ` [PATCH bpf 4/4] bpf: tracing: " Stanislav Fomichev
2019-05-08 17:56 ` [PATCH bpf 0/4] bpf: remove __rcu annotations from bpf_prog_array Alexei Starovoitov
2019-05-08 18:12   ` Stanislav Fomichev
2019-05-13 18:57     ` Stanislav Fomichev
2019-05-14 16:55       ` Alexei Starovoitov
2019-05-14 17:30         ` Stanislav Fomichev
2019-05-14 17:45           ` Alexei Starovoitov
2019-05-14 17:53             ` Stanislav Fomichev
2019-05-15  1:25               ` Alexei Starovoitov
2019-05-15  2:11                 ` Stanislav Fomichev
2019-05-15  2:27                   ` Alexei Starovoitov
2019-05-15  2:44                     ` Eric Dumazet
2019-05-15  2:56                       ` Stanislav Fomichev
2019-05-15  3:16                         ` Alexei Starovoitov
2019-05-15  3:38                           ` Stanislav Fomichev
2019-05-15  3:42                             ` Alexei Starovoitov
2019-05-15  3:49                               ` Stanislav Fomichev
