* [PATCH bpf-next v3 0/6] Sockmap iterator
@ 2020-09-04  9:58 Lorenz Bauer
  2020-09-04  9:58 ` [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET Lorenz Bauer
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:58 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

This addresses the feedback on v2 [2] (see [1] for v1). From my POV the
open questions from v2 are now resolved.

I have a set of check_func_arg cleanups which I will post as a follow-up
to this series.

Changes in v3:
- Use PTR_TO_BTF_ID in iterator context (Yonghong, Martin)
- Use rcu_dereference instead of rcu_dereference_raw (Jakub)
- Fix various test nits (Jakub, Andrii)

Changes in v2:
- Remove unnecessary sk_fullsock checks (Jakub)
- Nits for test output (Jakub)
- Increase number of sockets in tests to 64 (Jakub)
- Handle ENOENT in tests (Jakub)
- Actually test SOCKHASH iteration (myself)
- Fix SOCKHASH iterator initialization (myself)

1: https://lore.kernel.org/bpf/20200828094834.23290-1-lmb@cloudflare.com/
2: https://lore.kernel.org/bpf/20200901103210.54607-1-lmb@cloudflare.com/

Lorenz Bauer (6):
  bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  net: sockmap: Remove unnecessary sk_fullsock checks
  net: Allow iterating sockmap and sockhash
  selftests: bpf: Ensure that BTF sockets cannot be released
  selftests: bpf: Add helper to compare socket cookies
  selftests: bpf: Test copying a sockmap via bpf_iter

 kernel/bpf/verifier.c                         |  61 ++--
 net/core/sock_map.c                           | 284 +++++++++++++++++-
 .../bpf/prog_tests/reference_tracking.c       |  20 +-
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 138 ++++++++-
 tools/testing/selftests/bpf/progs/bpf_iter.h  |   9 +
 .../selftests/bpf/progs/bpf_iter_sockmap.c    |  57 ++++
 .../selftests/bpf/progs/bpf_iter_sockmap.h    |   3 +
 .../bpf/progs/test_sk_ref_track_invalid.c     |  20 ++
 8 files changed, 548 insertions(+), 44 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_sockmap.h
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_ref_track_invalid.c

-- 
2.25.1



* [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
@ 2020-09-04  9:58 ` Lorenz Bauer
  2020-09-06 22:40   ` Martin KaFai Lau
  2020-09-04  9:59 ` [PATCH bpf-next v3 2/6] net: sockmap: Remove unnecessary sk_fullsock checks Lorenz Bauer
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:58 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

Tracing programs can derive struct sock pointers from a variety
of sources, e.g. a bpf_iter for sk_storage maps receives one as
part of the context. It's desirable to be able to pass these to
functions that expect PTR_TO_SOCKET. For example, it enables us
to insert such a socket into a sockmap via bpf_map_update_elem.

Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
equivalent to PTR_TO_SOCKET. There is one hazard here:
bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
refcounted. Since this isn't the case for pointers derived from
BTF we must prevent them from being passed to the function.
Luckily, we can simply check that the ref_obj_id is not zero
in release_reference, and return an error otherwise.
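
As an illustration, here is a rough sketch of the kind of program this
enables (dst_sockmap is a hypothetical BPF_MAP_TYPE_SOCKMAP, not part
of this series; the sk_storage iterator context already exists):

	SEC("iter/bpf_sk_storage_map")
	int insert_sk(struct bpf_iter__bpf_sk_storage_map *ctx)
	{
		struct sock *sk = ctx->sk; /* PTR_TO_BTF_ID for struct sock */
		__u32 key = 0;

		if (!sk)
			return 0;

		/* The helper expects ARG_PTR_TO_SOCKET; with this patch the
		 * verifier also accepts the BTF pointer here.
		 */
		return bpf_map_update_elem(&dst_sockmap, &key, sk, BPF_ANY) != 0;
	}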

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b4e9c56b8b32..509754c3aa7d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
 	return 0;
 }
 
+BTF_ID_LIST(btf_fullsock_ids)
+BTF_ID(struct, sock)
+
 static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			  struct bpf_call_arg_meta *meta,
 			  const struct bpf_func_proto *fn)
@@ -4000,37 +4003,15 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		expected_type = PTR_TO_SOCKET;
 		if (!(register_is_null(reg) &&
 		      arg_type == ARG_PTR_TO_SOCKET_OR_NULL)) {
-			if (type != expected_type)
+			if (type != expected_type &&
+			    type != PTR_TO_BTF_ID)
 				goto err_type;
 		}
+		meta->btf_id = btf_fullsock_ids[0];
 	} else if (arg_type == ARG_PTR_TO_BTF_ID) {
-		bool ids_match = false;
-
 		expected_type = PTR_TO_BTF_ID;
 		if (type != expected_type)
 			goto err_type;
-		if (!fn->check_btf_id) {
-			if (reg->btf_id != meta->btf_id) {
-				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
-								 meta->btf_id);
-				if (!ids_match) {
-					verbose(env, "Helper has type %s got %s in R%d\n",
-						kernel_type_name(meta->btf_id),
-						kernel_type_name(reg->btf_id), regno);
-					return -EACCES;
-				}
-			}
-		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
-			verbose(env, "Helper does not support %s in R%d\n",
-				kernel_type_name(reg->btf_id), regno);
-
-			return -EACCES;
-		}
-		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
-			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
-				regno);
-			return -EACCES;
-		}
 	} else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
 		if (meta->func_id == BPF_FUNC_spin_lock) {
 			if (process_spin_lock(env, regno, true))
@@ -4085,6 +4066,33 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		return -EFAULT;
 	}
 
+	if (type == PTR_TO_BTF_ID) {
+		bool ids_match = false;
+
+		if (!fn->check_btf_id) {
+			if (reg->btf_id != meta->btf_id) {
+				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
+								 meta->btf_id);
+				if (!ids_match) {
+					verbose(env, "Helper has type %s got %s in R%d\n",
+						kernel_type_name(meta->btf_id),
+						kernel_type_name(reg->btf_id), regno);
+					return -EACCES;
+				}
+			}
+		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
+			verbose(env, "Helper does not support %s in R%d\n",
+				kernel_type_name(reg->btf_id), regno);
+
+			return -EACCES;
+		}
+		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
+			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
+				regno);
+			return -EACCES;
+		}
+	}
+
 	if (arg_type == ARG_CONST_MAP_PTR) {
 		/* bpf_map_xxx(map_ptr) call: remember that map_ptr */
 		meta->map_ptr = reg->map_ptr;
@@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
 	int err;
 	int i;
 
+	if (!ref_obj_id)
+		return -EINVAL;
+
 	err = release_reference_state(cur_func(env), ref_obj_id);
 	if (err)
 		return err;
-- 
2.25.1



* [PATCH bpf-next v3 2/6] net: sockmap: Remove unnecessary sk_fullsock checks
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
  2020-09-04  9:58 ` [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET Lorenz Bauer
@ 2020-09-04  9:59 ` Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 3/6] net: Allow iterating sockmap and sockhash Lorenz Bauer
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:59 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

The lookup paths for sockmap and sockhash currently include a check
that returns NULL if the socket we just found is not a full socket.
However, this check is not necessary. On insertion we ensure that
we have a full socket (caveat around sock_ops), so request sockets
are not a problem. Time-wait sockets are allocated separately from
the original socket and then fed into the hashdance. They don't
affect the sockets already stored in the sockmap.
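
For reference, the invariant is enforced by the state check on the
update path; condensed, it amounts to roughly the following (a
paraphrase of sock_map_sk_state_check(), not the exact code):

	/* Only established or listening TCP sockets pass, so request and
	 * time-wait minisocks never enter the map in the first place.
	 */
	static bool sk_state_allows_insert(const struct sock *sk)
	{
		if (sk->sk_protocol == IPPROTO_TCP)
			return (1 << sk->sk_state) &
			       (TCPF_ESTABLISHED | TCPF_LISTEN);
		return true;
	}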

Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 net/core/sock_map.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 078386d7d9a2..82494810d0ee 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -382,7 +382,7 @@ static void *sock_map_lookup(struct bpf_map *map, void *key)
 	struct sock *sk;
 
 	sk = __sock_map_lookup_elem(map, *(u32 *)key);
-	if (!sk || !sk_fullsock(sk))
+	if (!sk)
 		return NULL;
 	if (sk_is_refcounted(sk) && !refcount_inc_not_zero(&sk->sk_refcnt))
 		return NULL;
@@ -1110,7 +1110,7 @@ static void *sock_hash_lookup(struct bpf_map *map, void *key)
 	struct sock *sk;
 
 	sk = __sock_hash_lookup_elem(map, key);
-	if (!sk || !sk_fullsock(sk))
+	if (!sk)
 		return NULL;
 	if (sk_is_refcounted(sk) && !refcount_inc_not_zero(&sk->sk_refcnt))
 		return NULL;
-- 
2.25.1



* [PATCH bpf-next v3 3/6] net: Allow iterating sockmap and sockhash
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
  2020-09-04  9:58 ` [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 2/6] net: sockmap: Remove unnecessary sk_fullsock checks Lorenz Bauer
@ 2020-09-04  9:59 ` Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 4/6] selftests: bpf: Ensure that BTF sockets cannot be released Lorenz Bauer
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:59 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

Add bpf_iter support for sockmap / sockhash, based on the bpf_sk_storage and
hashtable implementation. sockmap and sockhash share the same iteration
context: a pointer to an arbitrary key and a pointer to a socket. Both
pointers may be NULL, and so BPF has to perform a NULL check before accessing
them. Technically it's not possible for sockhash iteration to yield a NULL
socket, but we ignore this so that both map types can share a single
iterator context.
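
From the BPF side this means every program starts with the same checks.
A minimal sketch (the selftest later in this series has the full
version):

	SEC("iter/sockmap")
	int count_elems(struct bpf_iter__sockmap *ctx)
	{
		__u32 *key = ctx->key;
		struct sock *sk = ctx->sk;

		if (!key)	/* end-of-iteration call */
			return 0;
		if (!sk)	/* empty slot, sockmap only */
			return 0;

		/* key and sk are now safe to dereference */
		return 0;
	}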

Iteration will visit all keys that remain unmodified during the lifetime of
the iterator. It may or may not visit newly added ones.

Switch from using rcu_dereference_raw to plain rcu_dereference, so we gain
another guard rail if CONFIG_PROVE_RCU is enabled.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 net/core/sock_map.c | 280 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 278 insertions(+), 2 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 82494810d0ee..e1f05e3fa1d0 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2017 - 2018 Covalent IO, Inc. http://covalent.io */
 
 #include <linux/bpf.h>
+#include <linux/btf_ids.h>
 #include <linux/filter.h>
 #include <linux/errno.h>
 #include <linux/file.h>
@@ -703,6 +704,109 @@ const struct bpf_func_proto bpf_msg_redirect_map_proto = {
 	.arg4_type      = ARG_ANYTHING,
 };
 
+struct sock_map_seq_info {
+	struct bpf_map *map;
+	struct sock *sk;
+	u32 index;
+};
+
+struct bpf_iter__sockmap {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct bpf_map *, map);
+	__bpf_md_ptr(void *, key);
+	__bpf_md_ptr(struct sock *, sk);
+};
+
+DEFINE_BPF_ITER_FUNC(sockmap, struct bpf_iter_meta *meta,
+		     struct bpf_map *map, void *key,
+		     struct sock *sk)
+
+static void *sock_map_seq_lookup_elem(struct sock_map_seq_info *info)
+{
+	if (unlikely(info->index >= info->map->max_entries))
+		return NULL;
+
+	info->sk = __sock_map_lookup_elem(info->map, info->index);
+
+	/* can't return sk directly, since that might be NULL */
+	return info;
+}
+
+static void *sock_map_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct sock_map_seq_info *info = seq->private;
+
+	if (*pos == 0)
+		++*pos;
+
+	/* pairs with sock_map_seq_stop */
+	rcu_read_lock();
+	return sock_map_seq_lookup_elem(info);
+}
+
+static void *sock_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct sock_map_seq_info *info = seq->private;
+
+	++*pos;
+	++info->index;
+
+	return sock_map_seq_lookup_elem(info);
+}
+
+static int sock_map_seq_show(struct seq_file *seq, void *v)
+{
+	struct sock_map_seq_info *info = seq->private;
+	struct bpf_iter__sockmap ctx = {};
+	struct bpf_iter_meta meta;
+	struct bpf_prog *prog;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, !v);
+	if (!prog)
+		return 0;
+
+	ctx.meta = &meta;
+	ctx.map = info->map;
+	if (v) {
+		ctx.key = &info->index;
+		ctx.sk = info->sk;
+	}
+
+	return bpf_iter_run_prog(prog, &ctx);
+}
+
+static void sock_map_seq_stop(struct seq_file *seq, void *v)
+{
+	if (!v)
+		(void)sock_map_seq_show(seq, NULL);
+
+	/* pairs with sock_map_seq_start */
+	rcu_read_unlock();
+}
+
+static const struct seq_operations sock_map_seq_ops = {
+	.start	= sock_map_seq_start,
+	.next	= sock_map_seq_next,
+	.stop	= sock_map_seq_stop,
+	.show	= sock_map_seq_show,
+};
+
+static int sock_map_init_seq_private(void *priv_data,
+				     struct bpf_iter_aux_info *aux)
+{
+	struct sock_map_seq_info *info = priv_data;
+
+	info->map = aux->map;
+	return 0;
+}
+
+static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
+	.seq_ops		= &sock_map_seq_ops,
+	.init_seq_private	= sock_map_init_seq_private,
+	.seq_priv_size		= sizeof(struct sock_map_seq_info),
+};
+
 static int sock_map_btf_id;
 const struct bpf_map_ops sock_map_ops = {
 	.map_meta_equal		= bpf_map_meta_equal,
@@ -717,6 +821,7 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_check_btf		= map_check_no_btf,
 	.map_btf_name		= "bpf_stab",
 	.map_btf_id		= &sock_map_btf_id,
+	.iter_seq_info		= &sock_map_iter_seq_info,
 };
 
 struct bpf_shtab_elem {
@@ -953,7 +1058,7 @@ static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 	if (!elem)
 		goto find_first_elem;
 
-	elem_next = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(&elem->node)),
+	elem_next = hlist_entry_safe(rcu_dereference(hlist_next_rcu(&elem->node)),
 				     struct bpf_shtab_elem, node);
 	if (elem_next) {
 		memcpy(key_next, elem_next->key, key_size);
@@ -965,7 +1070,7 @@ static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 find_first_elem:
 	for (; i < htab->buckets_num; i++) {
 		head = &sock_hash_select_bucket(htab, i)->head;
-		elem_next = hlist_entry_safe(rcu_dereference_raw(hlist_first_rcu(head)),
+		elem_next = hlist_entry_safe(rcu_dereference(hlist_first_rcu(head)),
 					     struct bpf_shtab_elem, node);
 		if (elem_next) {
 			memcpy(key_next, elem_next->key, key_size);
@@ -1199,6 +1304,117 @@ const struct bpf_func_proto bpf_msg_redirect_hash_proto = {
 	.arg4_type      = ARG_ANYTHING,
 };
 
+struct sock_hash_seq_info {
+	struct bpf_map *map;
+	struct bpf_shtab *htab;
+	u32 bucket_id;
+};
+
+static void *sock_hash_seq_find_next(struct sock_hash_seq_info *info,
+				     struct bpf_shtab_elem *prev_elem)
+{
+	const struct bpf_shtab *htab = info->htab;
+	struct bpf_shtab_bucket *bucket;
+	struct bpf_shtab_elem *elem;
+	struct hlist_node *node;
+
+	/* try to find next elem in the same bucket */
+	if (prev_elem) {
+		node = rcu_dereference(hlist_next_rcu(&prev_elem->node));
+		elem = hlist_entry_safe(node, struct bpf_shtab_elem, node);
+		if (elem)
+			return elem;
+
+		/* no more elements, continue in the next bucket */
+		info->bucket_id++;
+	}
+
+	for (; info->bucket_id < htab->buckets_num; info->bucket_id++) {
+		bucket = &htab->buckets[info->bucket_id];
+		node = rcu_dereference(hlist_first_rcu(&bucket->head));
+		elem = hlist_entry_safe(node, struct bpf_shtab_elem, node);
+		if (elem)
+			return elem;
+	}
+
+	return NULL;
+}
+
+static void *sock_hash_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct sock_hash_seq_info *info = seq->private;
+
+	if (*pos == 0)
+		++*pos;
+
+	/* pairs with sock_hash_seq_stop */
+	rcu_read_lock();
+	return sock_hash_seq_find_next(info, NULL);
+}
+
+static void *sock_hash_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct sock_hash_seq_info *info = seq->private;
+
+	++*pos;
+	return sock_hash_seq_find_next(info, v);
+}
+
+static int sock_hash_seq_show(struct seq_file *seq, void *v)
+{
+	struct sock_hash_seq_info *info = seq->private;
+	struct bpf_iter__sockmap ctx = {};
+	struct bpf_shtab_elem *elem = v;
+	struct bpf_iter_meta meta;
+	struct bpf_prog *prog;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, !elem);
+	if (!prog)
+		return 0;
+
+	ctx.meta = &meta;
+	ctx.map = info->map;
+	if (elem) {
+		ctx.key = elem->key;
+		ctx.sk = elem->sk;
+	}
+
+	return bpf_iter_run_prog(prog, &ctx);
+}
+
+static void sock_hash_seq_stop(struct seq_file *seq, void *v)
+{
+	if (!v)
+		(void)sock_hash_seq_show(seq, NULL);
+
+	/* pairs with sock_hash_seq_start */
+	rcu_read_unlock();
+}
+
+static const struct seq_operations sock_hash_seq_ops = {
+	.start	= sock_hash_seq_start,
+	.next	= sock_hash_seq_next,
+	.stop	= sock_hash_seq_stop,
+	.show	= sock_hash_seq_show,
+};
+
+static int sock_hash_init_seq_private(void *priv_data,
+				     struct bpf_iter_aux_info *aux)
+{
+	struct sock_hash_seq_info *info = priv_data;
+
+	info->map = aux->map;
+	info->htab = container_of(aux->map, struct bpf_shtab, map);
+	return 0;
+}
+
+static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
+	.seq_ops		= &sock_hash_seq_ops,
+	.init_seq_private	= sock_hash_init_seq_private,
+	.seq_priv_size		= sizeof(struct sock_hash_seq_info),
+};
+
 static int sock_hash_map_btf_id;
 const struct bpf_map_ops sock_hash_ops = {
 	.map_meta_equal		= bpf_map_meta_equal,
@@ -1213,6 +1429,7 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_check_btf		= map_check_no_btf,
 	.map_btf_name		= "bpf_shtab",
 	.map_btf_id		= &sock_hash_map_btf_id,
+	.iter_seq_info		= &sock_hash_iter_seq_info,
 };
 
 static struct sk_psock_progs *sock_map_progs(struct bpf_map *map)
@@ -1323,3 +1540,62 @@ void sock_map_close(struct sock *sk, long timeout)
 	release_sock(sk);
 	saved_close(sk, timeout);
 }
+
+static int sock_map_iter_attach_target(struct bpf_prog *prog,
+				       union bpf_iter_link_info *linfo,
+				       struct bpf_iter_aux_info *aux)
+{
+	struct bpf_map *map;
+	int err = -EINVAL;
+
+	if (!linfo->map.map_fd)
+		return -EBADF;
+
+	map = bpf_map_get_with_uref(linfo->map.map_fd);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	if (map->map_type != BPF_MAP_TYPE_SOCKMAP &&
+	    map->map_type != BPF_MAP_TYPE_SOCKHASH)
+		goto put_map;
+
+	if (prog->aux->max_rdonly_access > map->key_size) {
+		err = -EACCES;
+		goto put_map;
+	}
+
+	aux->map = map;
+	return 0;
+
+put_map:
+	bpf_map_put_with_uref(map);
+	return err;
+}
+
+static void sock_map_iter_detach_target(struct bpf_iter_aux_info *aux)
+{
+	bpf_map_put_with_uref(aux->map);
+}
+
+static struct bpf_iter_reg sock_map_iter_reg = {
+	.target			= "sockmap",
+	.attach_target		= sock_map_iter_attach_target,
+	.detach_target		= sock_map_iter_detach_target,
+	.show_fdinfo		= bpf_iter_map_show_fdinfo,
+	.fill_link_info		= bpf_iter_map_fill_link_info,
+	.ctx_arg_info_size	= 2,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__sockmap, key),
+		  PTR_TO_RDONLY_BUF_OR_NULL },
+		{ offsetof(struct bpf_iter__sockmap, sk),
+		  PTR_TO_BTF_ID_OR_NULL },
+	},
+};
+
+static int __init bpf_sockmap_iter_init(void)
+{
+	sock_map_iter_reg.ctx_arg_info[1].btf_id =
+		btf_sock_ids[BTF_SOCK_TYPE_SOCK];
+	return bpf_iter_reg_target(&sock_map_iter_reg);
+}
+late_initcall(bpf_sockmap_iter_init);
-- 
2.25.1



* [PATCH bpf-next v3 4/6] selftests: bpf: Ensure that BTF sockets cannot be released
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
                   ` (2 preceding siblings ...)
  2020-09-04  9:59 ` [PATCH bpf-next v3 3/6] net: Allow iterating sockmap and sockhash Lorenz Bauer
@ 2020-09-04  9:59 ` Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 5/6] selftests: bpf: Add helper to compare socket cookies Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 6/6] selftests: bpf: Test copying a sockmap via bpf_iter Lorenz Bauer
  5 siblings, 0 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:59 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

Being able to pass a BTF struct sock* to bpf_sk_release would screw up
reference counting, and must therefore be prevented. Add a test which
ensures that this property holds.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../bpf/prog_tests/reference_tracking.c       | 20 ++++++++++++++++++-
 .../bpf/progs/test_sk_ref_track_invalid.c     | 20 +++++++++++++++++++
 2 files changed, 39 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_sk_ref_track_invalid.c

diff --git a/tools/testing/selftests/bpf/prog_tests/reference_tracking.c b/tools/testing/selftests/bpf/prog_tests/reference_tracking.c
index fc0d7f4f02cf..3147655608ab 100644
--- a/tools/testing/selftests/bpf/prog_tests/reference_tracking.c
+++ b/tools/testing/selftests/bpf/prog_tests/reference_tracking.c
@@ -1,7 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
+#include "test_sk_ref_track_invalid.skel.h"
 
-void test_reference_tracking(void)
+static void test_sk_lookup(void)
 {
 	const char *file = "test_sk_lookup_kern.o";
 	const char *obj_name = "ref_track";
@@ -50,3 +51,20 @@ void test_reference_tracking(void)
 cleanup:
 	bpf_object__close(obj);
 }
+
+static void test_sk_release_invalid(void)
+{
+	struct test_sk_ref_track_invalid *skel;
+	int duration = 0;
+
+	skel = test_sk_ref_track_invalid__open_and_load();
+	if (CHECK(skel, "open_and_load", "verifier accepted sk_release of BTF struct sock*\n"))
+		test_sk_ref_track_invalid__destroy(skel);
+}
+
+void test_reference_tracking(void)
+{
+	test_sk_lookup();
+	if (test__start_subtest("invalid sk_release"))
+		test_sk_release_invalid();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_sk_ref_track_invalid.c b/tools/testing/selftests/bpf/progs/test_sk_ref_track_invalid.c
new file mode 100644
index 000000000000..9017d92a807b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sk_ref_track_invalid.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Cloudflare
+
+#include "bpf_iter.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("iter/bpf_sk_storage_map")
+int dump_bpf_sk_storage_map(struct bpf_iter__bpf_sk_storage_map *ctx)
+{
+	struct sock *sk = ctx->sk;
+
+	if (sk)
+		bpf_sk_release((struct bpf_sock *)sk);
+
+	return 0;
+}
-- 
2.25.1



* [PATCH bpf-next v3 5/6] selftests: bpf: Add helper to compare socket cookies
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
                   ` (3 preceding siblings ...)
  2020-09-04  9:59 ` [PATCH bpf-next v3 4/6] selftests: bpf: Ensure that BTF sockets cannot be released Lorenz Bauer
@ 2020-09-04  9:59 ` Lorenz Bauer
  2020-09-04  9:59 ` [PATCH bpf-next v3 6/6] selftests: bpf: Test copying a sockmap via bpf_iter Lorenz Bauer
  5 siblings, 0 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:59 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

We compare socket cookies to ensure that insertion into a sockmap worked.
Pull this out into a helper function for use in other tests.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 50 +++++++++++++------
 1 file changed, 36 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 0b79d78b98db..0fe2a737fc8e 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -47,6 +47,37 @@ static int connected_socket_v4(void)
 	return -1;
 }
 
+static void compare_cookies(struct bpf_map *src, struct bpf_map *dst)
+{
+	__u32 i, max_entries = bpf_map__max_entries(src);
+	int err, duration = 0, src_fd, dst_fd;
+
+	src_fd = bpf_map__fd(src);
+	dst_fd = bpf_map__fd(dst);
+
+	for (i = 0; i < max_entries; i++) {
+		__u64 src_cookie, dst_cookie;
+
+		err = bpf_map_lookup_elem(src_fd, &i, &src_cookie);
+		if (err && errno == ENOENT) {
+			err = bpf_map_lookup_elem(dst_fd, &i, &dst_cookie);
+			CHECK(!err, "map_lookup_elem(dst)", "element %u not deleted\n", i);
+			CHECK(err && errno != ENOENT, "map_lookup_elem(dst)", "%s\n",
+			      strerror(errno));
+			continue;
+		}
+		if (CHECK(err, "lookup_elem(src)", "%s\n", strerror(errno)))
+			continue;
+
+		err = bpf_map_lookup_elem(dst_fd, &i, &dst_cookie);
+		if (CHECK(err, "lookup_elem(dst)", "%s\n", strerror(errno)))
+			continue;
+
+		CHECK(dst_cookie != src_cookie, "cookie mismatch",
+		      "%llu != %llu (pos %u)\n", dst_cookie, src_cookie, i);
+	}
+}
+
 /* Create a map, populate it with one socket, and free the map. */
 static void test_sockmap_create_update_free(enum bpf_map_type map_type)
 {
@@ -106,9 +137,9 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
 static void test_sockmap_update(enum bpf_map_type map_type)
 {
 	struct bpf_prog_test_run_attr tattr;
-	int err, prog, src, dst, duration = 0;
+	int err, prog, src, duration = 0;
 	struct test_sockmap_update *skel;
-	__u64 src_cookie, dst_cookie;
+	struct bpf_map *dst_map;
 	const __u32 zero = 0;
 	char dummy[14] = {0};
 	__s64 sk;
@@ -124,18 +155,14 @@ static void test_sockmap_update(enum bpf_map_type map_type)
 	prog = bpf_program__fd(skel->progs.copy_sock_map);
 	src = bpf_map__fd(skel->maps.src);
 	if (map_type == BPF_MAP_TYPE_SOCKMAP)
-		dst = bpf_map__fd(skel->maps.dst_sock_map);
+		dst_map = skel->maps.dst_sock_map;
 	else
-		dst = bpf_map__fd(skel->maps.dst_sock_hash);
+		dst_map = skel->maps.dst_sock_hash;
 
 	err = bpf_map_update_elem(src, &zero, &sk, BPF_NOEXIST);
 	if (CHECK(err, "update_elem(src)", "errno=%u\n", errno))
 		goto out;
 
-	err = bpf_map_lookup_elem(src, &zero, &src_cookie);
-	if (CHECK(err, "lookup_elem(src, cookie)", "errno=%u\n", errno))
-		goto out;
-
 	tattr = (struct bpf_prog_test_run_attr){
 		.prog_fd = prog,
 		.repeat = 1,
@@ -148,12 +175,7 @@ static void test_sockmap_update(enum bpf_map_type map_type)
 		       "errno=%u retval=%u\n", errno, tattr.retval))
 		goto out;
 
-	err = bpf_map_lookup_elem(dst, &zero, &dst_cookie);
-	if (CHECK(err, "lookup_elem(dst, cookie)", "errno=%u\n", errno))
-		goto out;
-
-	CHECK(dst_cookie != src_cookie, "cookie mismatch", "%llu != %llu\n",
-	      dst_cookie, src_cookie);
+	compare_cookies(skel->maps.src, dst_map);
 
 out:
 	test_sockmap_update__destroy(skel);
-- 
2.25.1



* [PATCH bpf-next v3 6/6] selftests: bpf: Test copying a sockmap via bpf_iter
  2020-09-04  9:58 [PATCH bpf-next v3 0/6] Sockmap iterator Lorenz Bauer
                   ` (4 preceding siblings ...)
  2020-09-04  9:59 ` [PATCH bpf-next v3 5/6] selftests: bpf: Add helper to compare socket cookies Lorenz Bauer
@ 2020-09-04  9:59 ` Lorenz Bauer
  5 siblings, 0 replies; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-04  9:59 UTC (permalink / raw)
  To: ast, yhs, daniel, jakub, john.fastabend; +Cc: bpf, kernel-team, Lorenz Bauer

Add a test that exercises a basic sockmap / sockhash copy using bpf_iter.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 88 +++++++++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |  9 ++
 .../selftests/bpf/progs/bpf_iter_sockmap.c    | 57 ++++++++++++
 .../selftests/bpf/progs/bpf_iter_sockmap.h    |  3 +
 4 files changed, 157 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_iter_sockmap.h

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 0fe2a737fc8e..088903bdf10c 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -6,6 +6,9 @@
 #include "test_skmsg_load_helpers.skel.h"
 #include "test_sockmap_update.skel.h"
 #include "test_sockmap_invalid_update.skel.h"
+#include "bpf_iter_sockmap.skel.h"
+
+#include "progs/bpf_iter_sockmap.h"
 
 #define TCP_REPAIR		19	/* TCP sock is under repair right now */
 
@@ -193,6 +196,87 @@ static void test_sockmap_invalid_update(void)
 		test_sockmap_invalid_update__destroy(skel);
 }
 
+static void test_sockmap_copy(enum bpf_map_type map_type)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	int err, len, src_fd, iter_fd, duration = 0;
+	union bpf_iter_link_info linfo = {0};
+	__s64 sock_fd[SOCKMAP_MAX_ENTRIES];
+	__u32 i, num_sockets, max_elems;
+	struct bpf_iter_sockmap *skel;
+	struct bpf_map *src, *dst;
+	struct bpf_link *link;
+	char buf[64];
+
+	skel = bpf_iter_sockmap__open_and_load();
+	if (CHECK(!skel, "bpf_iter_sockmap__open_and_load", "skeleton open_and_load failed\n"))
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(sock_fd); i++)
+		sock_fd[i] = -1;
+
+	/* Make sure we have at least one "empty" entry to test iteration of
+	 * an empty slot in an array.
+	 */
+	num_sockets = ARRAY_SIZE(sock_fd) - 1;
+
+	if (map_type == BPF_MAP_TYPE_SOCKMAP) {
+		src = skel->maps.sockmap;
+		max_elems = bpf_map__max_entries(src);
+	} else {
+		src = skel->maps.sockhash;
+		max_elems = num_sockets;
+	}
+
+	dst = skel->maps.dst;
+	src_fd = bpf_map__fd(src);
+
+	for (i = 0; i < num_sockets; i++) {
+		sock_fd[i] = connected_socket_v4();
+		if (CHECK(sock_fd[i] == -1, "connected_socket_v4", "cannot connect\n"))
+			goto out;
+
+		err = bpf_map_update_elem(src_fd, &i, &sock_fd[i], BPF_NOEXIST);
+		if (CHECK(err, "map_update", "failed: %s\n", strerror(errno)))
+			goto out;
+	}
+
+	linfo.map.map_fd = src_fd;
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+	link = bpf_program__attach_iter(skel->progs.copy_sockmap, &opts);
+	if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n"))
+		goto out;
+
+	iter_fd = bpf_iter_create(bpf_link__fd(link));
+	if (CHECK(iter_fd < 0, "create_iter", "create_iter failed\n"))
+		goto free_link;
+
+	/* drain the iterator, which runs the program over each element */
+	while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
+		;
+	if (CHECK(len < 0, "read", "failed: %s\n", strerror(errno)))
+		goto close_iter;
+
+	/* test results */
+	if (CHECK(skel->bss->elems != max_elems, "elems", "got %u expected %u\n",
+		  skel->bss->elems, max_elems))
+		goto close_iter;
+
+	compare_cookies(src, dst);
+
+close_iter:
+	close(iter_fd);
+free_link:
+	bpf_link__destroy(link);
+out:
+	for (i = 0; i < num_sockets; i++) {
+		if (sock_fd[i] >= 0)
+			close(sock_fd[i]);
+	}
+	bpf_iter_sockmap__destroy(skel);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -209,4 +293,8 @@ void test_sockmap_basic(void)
 		test_sockmap_update(BPF_MAP_TYPE_SOCKHASH);
 	if (test__start_subtest("sockmap update in unsafe context"))
 		test_sockmap_invalid_update();
+	if (test__start_subtest("sockmap copy"))
+		test_sockmap_copy(BPF_MAP_TYPE_SOCKMAP);
+	if (test__start_subtest("sockhash copy"))
+		test_sockmap_copy(BPF_MAP_TYPE_SOCKHASH);
 }
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter.h b/tools/testing/selftests/bpf/progs/bpf_iter.h
index c196280df90d..df682af75510 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter.h
+++ b/tools/testing/selftests/bpf/progs/bpf_iter.h
@@ -13,6 +13,7 @@
 #define udp6_sock udp6_sock___not_used
 #define bpf_iter__bpf_map_elem bpf_iter__bpf_map_elem___not_used
 #define bpf_iter__bpf_sk_storage_map bpf_iter__bpf_sk_storage_map___not_used
+#define bpf_iter__sockmap bpf_iter__sockmap___not_used
 #include "vmlinux.h"
 #undef bpf_iter_meta
 #undef bpf_iter__bpf_map
@@ -26,6 +27,7 @@
 #undef udp6_sock
 #undef bpf_iter__bpf_map_elem
 #undef bpf_iter__bpf_sk_storage_map
+#undef bpf_iter__sockmap
 
 struct bpf_iter_meta {
 	struct seq_file *seq;
@@ -96,3 +98,10 @@ struct bpf_iter__bpf_sk_storage_map {
 	struct sock *sk;
 	void *value;
 };
+
+struct bpf_iter__sockmap {
+	struct bpf_iter_meta *meta;
+	struct bpf_map *map;
+	void *key;
+	struct sock *sk;
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c b/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c
new file mode 100644
index 000000000000..d8d107c80b7b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Cloudflare */
+#include "bpf_iter.h"
+#include "bpf_tracing_net.h"
+#include "bpf_iter_sockmap.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <errno.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, SOCKMAP_MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u64);
+} sockmap SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, SOCKMAP_MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u64);
+} sockhash SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, SOCKMAP_MAX_ENTRIES);
+	__type(key, __u32);
+	__type(value, __u64);
+} dst SEC(".maps");
+
+__u32 elems = 0;
+
+SEC("iter/sockmap")
+int copy_sockmap(struct bpf_iter__sockmap *ctx)
+{
+	struct sock *sk = ctx->sk;
+	__u32 tmp, *key = ctx->key;
+	int ret;
+
+	if (!key)
+		return 0;
+
+	elems++;
+
+	/* We need a temporary buffer on the stack, since the verifier doesn't
+	 * let us use the pointer from the context as an argument to the helper.
+	 */
+	tmp = *key;
+
+	if (sk)
+		return bpf_map_update_elem(&dst, &tmp, sk, 0) != 0;
+
+	ret = bpf_map_delete_elem(&dst, &tmp);
+	return ret && ret != -ENOENT;
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.h b/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.h
new file mode 100644
index 000000000000..f98ad727ac06
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_sockmap.h
@@ -0,0 +1,3 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define SOCKMAP_MAX_ENTRIES (64)
-- 
2.25.1



* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-04  9:58 ` [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET Lorenz Bauer
@ 2020-09-06 22:40   ` Martin KaFai Lau
  2020-09-07  8:57     ` Lorenz Bauer
  0 siblings, 1 reply; 13+ messages in thread
From: Martin KaFai Lau @ 2020-09-06 22:40 UTC (permalink / raw)
  To: Lorenz Bauer; +Cc: ast, yhs, daniel, jakub, john.fastabend, bpf, kernel-team

On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> Tracing programs can derive struct sock pointers from a variety
> of sources, e.g. a bpf_iter for sk_storage maps receives one as
> part of the context. It's desirable to be able to pass these to
> functions that expect PTR_TO_SOCKET. For example, it enables us
> to insert such a socket into a sockmap via bpf_map_update_elem.
> 
> Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> equivalent to PTR_TO_SOCKET. There is one hazard here:
> bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
> refcounted. Since this isn't the case for pointers derived from
> BTF we must prevent them from being passed to the function.
> Luckily, we can simply check that the ref_obj_id is not zero
> in release_reference, and return an error otherwise.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>  kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
>  1 file changed, 36 insertions(+), 25 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b4e9c56b8b32..509754c3aa7d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
>  	return 0;
>  }
>  
> +BTF_ID_LIST(btf_fullsock_ids)
> +BTF_ID(struct, sock)
It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
as a fullsock (i.e. PTR_TO_SOCKET).

This is a generic verifier change though.  For tracing, it is not always the
case: a tracing program cannot assume that the "struct sock *" in the function
being traced is always a fullsock.

Currently, a func_proto taking ARG_PTR_TO_SOCKET can safely
assume it gets a fullsock.  If it is also allowed to take a BTF_ID
"struct sock" at verification time, I think a sk_fullsock()
check at runtime is needed, and this check should be pretty
cheap to do.
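
Concretely, the suggestion amounts to a guard like this at the top of
each affected helper (a sketch only; the exact error convention is
assumed):

	if (!sk_fullsock(sk))
		return -EINVAL;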

> +
>  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  			  struct bpf_call_arg_meta *meta,
>  			  const struct bpf_func_proto *fn)
> @@ -4000,37 +4003,15 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  		expected_type = PTR_TO_SOCKET;
>  		if (!(register_is_null(reg) &&
>  		      arg_type == ARG_PTR_TO_SOCKET_OR_NULL)) {
> -			if (type != expected_type)
> +			if (type != expected_type &&
> +			    type != PTR_TO_BTF_ID)
>  				goto err_type;
>  		}
> +		meta->btf_id = btf_fullsock_ids[0];
>  	} else if (arg_type == ARG_PTR_TO_BTF_ID) {
> -		bool ids_match = false;
> -
>  		expected_type = PTR_TO_BTF_ID;
>  		if (type != expected_type)
>  			goto err_type;
> -		if (!fn->check_btf_id) {
> -			if (reg->btf_id != meta->btf_id) {
> -				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
> -								 meta->btf_id);
> -				if (!ids_match) {
> -					verbose(env, "Helper has type %s got %s in R%d\n",
> -						kernel_type_name(meta->btf_id),
> -						kernel_type_name(reg->btf_id), regno);
> -					return -EACCES;
> -				}
> -			}
> -		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
> -			verbose(env, "Helper does not support %s in R%d\n",
> -				kernel_type_name(reg->btf_id), regno);
> -
> -			return -EACCES;
> -		}
> -		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
> -			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
> -				regno);
> -			return -EACCES;
> -		}
>  	} else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
>  		if (meta->func_id == BPF_FUNC_spin_lock) {
>  			if (process_spin_lock(env, regno, true))
> @@ -4085,6 +4066,33 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  		return -EFAULT;
>  	}
>  
> +	if (type == PTR_TO_BTF_ID) {
> +		bool ids_match = false;
> +
> +		if (!fn->check_btf_id) {
> +			if (reg->btf_id != meta->btf_id) {
> +				ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
> +								 meta->btf_id);
> +				if (!ids_match) {
> +					verbose(env, "Helper has type %s got %s in R%d\n",
> +						kernel_type_name(meta->btf_id),
> +						kernel_type_name(reg->btf_id), regno);
> +					return -EACCES;
> +				}
> +			}
> +		} else if (!fn->check_btf_id(reg->btf_id, arg)) {
> +			verbose(env, "Helper does not support %s in R%d\n",
> +				kernel_type_name(reg->btf_id), regno);
> +
> +			return -EACCES;
> +		}
> +		if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
> +			verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
> +				regno);
> +			return -EACCES;
> +		}
> +	}
> +
>  	if (arg_type == ARG_CONST_MAP_PTR) {
>  		/* bpf_map_xxx(map_ptr) call: remember that map_ptr */
>  		meta->map_ptr = reg->map_ptr;
> @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
>  	int err;
>  	int i;
>  
> +	if (!ref_obj_id)
> +		return -EINVAL;
Hmm... are you sure this is needed?  If it is, it seems there was
an existing bug in release_reference_state() below, which could not catch
the case where "bpf_sk_release()" is called on a pointer that has no
reference acquired beforehand.

Can you write a verifier test to demonstrate the issue?

> +
>  	err = release_reference_state(cur_func(env), ref_obj_id);
>  	if (err)
>  		return err;
> -- 
> 2.25.1
> 


* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-06 22:40   ` Martin KaFai Lau
@ 2020-09-07  8:57     ` Lorenz Bauer
  2020-09-08 19:52       ` Martin KaFai Lau
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-07  8:57 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Jakub Sitnicki, John Fastabend, bpf, kernel-team

On Sun, 6 Sep 2020 at 23:40, Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> > Tracing programs can derive struct sock pointers from a variety
> > of sources, e.g. a bpf_iter for sk_storage maps receives one as
> > part of the context. It's desirable to be able to pass these to
> > functions that expect PTR_TO_SOCKET. For example, it enables us
> > to insert such a socket into a sockmap via bpf_map_update_elem.
> >
> > Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> > equivalent to PTR_TO_SOCKET. There is one hazard here:
> > bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
> > refcounted. Since this isn't the case for pointers derived from
> > BTF we must prevent them from being passed to the function.
> > Luckily, we can simply check that the ref_obj_id is not zero
> > in release_reference, and return an error otherwise.
> >
> > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > ---
> >  kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
> >  1 file changed, 36 insertions(+), 25 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b4e9c56b8b32..509754c3aa7d 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
> >       return 0;
> >  }
> >
> > +BTF_ID_LIST(btf_fullsock_ids)
> > +BTF_ID(struct, sock)
> It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
> as a fullsock (i.e. PTR_TO_SOCKET).

I think it's unsafe even for the sockmap iter. Since it's a tracing
prog there might be other ways for it to obtain a struct sock * in the
future.

> This is a generic verifier change though.  For tracing, it is not always the
> case: a tracing program cannot assume that the "struct sock *" in the function
> being traced is always a fullsock.

Yes, I see, thanks for reminding me. What a footgun. I think the
problem boils down to the fact that we can't express "this is a full
socket" in BTF, since there is no such type in the kernel.

Which makes me wonder: how do tracing programs deal with a struct sock*
that really is a request sock or something?

> Currently, a func_proto taking ARG_PTR_TO_SOCKET can safely
> assume it gets a fullsock.  If it is also allowed to take a BTF_ID
> "struct sock" at verification time, I think a sk_fullsock()
> check at runtime is needed, and this check should be pretty
> cheap to do.

Can you elaborate a little? Where do you think the check could happen?

I could change the patch to treat struct sock * as PTR_TO_SOCK_COMMON,
and adjust the sockmap helpers accordingly. The implication is that
over time, helpers will migrate to PTR_TO_SOCK_COMMON because that is
compatible with BTF. PTR_TO_SOCKET will become unused except to
maintain the ABI for access to struct bpf_sock. Maybe that's OK
though?

> > +
> >  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                         struct bpf_call_arg_meta *meta,
> >                         const struct bpf_func_proto *fn)
> > @@ -4000,37 +4003,15 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >               expected_type = PTR_TO_SOCKET;
> >               if (!(register_is_null(reg) &&
> >                     arg_type == ARG_PTR_TO_SOCKET_OR_NULL)) {
> > -                     if (type != expected_type)
> > +                     if (type != expected_type &&
> > +                         type != PTR_TO_BTF_ID)
> >                               goto err_type;
> >               }
> > +             meta->btf_id = btf_fullsock_ids[0];
> >       } else if (arg_type == ARG_PTR_TO_BTF_ID) {
> > -             bool ids_match = false;
> > -
> >               expected_type = PTR_TO_BTF_ID;
> >               if (type != expected_type)
> >                       goto err_type;
> > -             if (!fn->check_btf_id) {
> > -                     if (reg->btf_id != meta->btf_id) {
> > -                             ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
> > -                                                              meta->btf_id);
> > -                             if (!ids_match) {
> > -                                     verbose(env, "Helper has type %s got %s in R%d\n",
> > -                                             kernel_type_name(meta->btf_id),
> > -                                             kernel_type_name(reg->btf_id), regno);
> > -                                     return -EACCES;
> > -                             }
> > -                     }
> > -             } else if (!fn->check_btf_id(reg->btf_id, arg)) {
> > -                     verbose(env, "Helper does not support %s in R%d\n",
> > -                             kernel_type_name(reg->btf_id), regno);
> > -
> > -                     return -EACCES;
> > -             }
> > -             if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
> > -                     verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
> > -                             regno);
> > -                     return -EACCES;
> > -             }
> >       } else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
> >               if (meta->func_id == BPF_FUNC_spin_lock) {
> >                       if (process_spin_lock(env, regno, true))
> > @@ -4085,6 +4066,33 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >               return -EFAULT;
> >       }
> >
> > +     if (type == PTR_TO_BTF_ID) {
> > +             bool ids_match = false;
> > +
> > +             if (!fn->check_btf_id) {
> > +                     if (reg->btf_id != meta->btf_id) {
> > +                             ids_match = btf_struct_ids_match(&env->log, reg->off, reg->btf_id,
> > +                                                              meta->btf_id);
> > +                             if (!ids_match) {
> > +                                     verbose(env, "Helper has type %s got %s in R%d\n",
> > +                                             kernel_type_name(meta->btf_id),
> > +                                             kernel_type_name(reg->btf_id), regno);
> > +                                     return -EACCES;
> > +                             }
> > +                     }
> > +             } else if (!fn->check_btf_id(reg->btf_id, arg)) {
> > +                     verbose(env, "Helper does not support %s in R%d\n",
> > +                             kernel_type_name(reg->btf_id), regno);
> > +
> > +                     return -EACCES;
> > +             }
> > +             if ((reg->off && !ids_match) || !tnum_is_const(reg->var_off) || reg->var_off.value) {
> > +                     verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
> > +                             regno);
> > +                     return -EACCES;
> > +             }
> > +     }
> > +
> >       if (arg_type == ARG_CONST_MAP_PTR) {
> >               /* bpf_map_xxx(map_ptr) call: remember that map_ptr */
> >               meta->map_ptr = reg->map_ptr;
> > @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
> >       int err;
> >       int i;
> >
> > +     if (!ref_obj_id)
> > +             return -EINVAL;
> Hmm... are you sure this is needed?  If it is, it seems there was
> an existing bug in release_reference_state() below, which could not catch
> the case where "bpf_sk_release()" is called on a pointer that has no
> reference acquired beforehand.

Since sk_release takes a PTR_TO_SOCKET, it's possible to pass a tracing
struct sock * to it after this patch. Adding this check prevents the
release from succeeding.

>
> Can you write a verifier test to demonstrate the issue?

There is a selftest in this series that ensures calling sk_release
doesn't work, which exercises this.

>
> > +
> >       err = release_reference_state(cur_func(env), ref_obj_id);
> >       if (err)
> >               return err;
> > --
> > 2.25.1
> >



-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-07  8:57     ` Lorenz Bauer
@ 2020-09-08 19:52       ` Martin KaFai Lau
  2020-09-09  9:16         ` Lorenz Bauer
  0 siblings, 1 reply; 13+ messages in thread
From: Martin KaFai Lau @ 2020-09-08 19:52 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Jakub Sitnicki, John Fastabend, bpf, kernel-team

On Mon, Sep 07, 2020 at 09:57:06AM +0100, Lorenz Bauer wrote:
> On Sun, 6 Sep 2020 at 23:40, Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> > > Tracing programs can derive struct sock pointers from a variety
> > > of sources, e.g. a bpf_iter for sk_storage maps receives one as
> > > part of the context. It's desirable to be able to pass these to
> > > functions that expect PTR_TO_SOCKET. For example, it enables us
> > > to insert such a socket into a sockmap via bpf_map_update_elem.
> > >
> > > Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> > > equivalent to PTR_TO_SOCKET. There is one hazard here:
> > > bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
> > > refcounted. Since this isn't the case for pointers derived from
> > > BTF we must prevent them from being passed to the function.
> > > Luckily, we can simply check that the ref_obj_id is not zero
> > > in release_reference, and return an error otherwise.
> > >
> > > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > > ---
> > >  kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
> > >  1 file changed, 36 insertions(+), 25 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index b4e9c56b8b32..509754c3aa7d 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
> > >       return 0;
> > >  }
> > >
> > > +BTF_ID_LIST(btf_fullsock_ids)
> > > +BTF_ID(struct, sock)
> > It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
> > as a fullsock (i.e. PTR_TO_SOCKET).
> 
> I think it's unsafe even for the sockmap iter. Since it's a tracing
> prog there might
> be other ways for it to obtain a struct sock * in the future.
> 
> > This is a generic verifier change though.  For tracing, it is not always the
> > case: a tracing program cannot assume that the "struct sock *" in the function
> > being traced is always a fullsock.
> 
> Yes, I see, thanks for reminding me. What a footgun. I think the
> problem boils down
> to the fact that we can't express "this is a full socket" in BTF,
> since there is no such
> type in the kernel.
> 
> Which makes me wonder: how do tracing programs deal with struct sock*
> that really
> is a request sock or something?
PTR_TO_BTF_ID is handled differently, by BPF_PROBE_MEM, to take care
of cases like this.  bpf_jit_comp.c has some more details.
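
In a nutshell (a sketch of the mechanism, not actual JIT output): a
load through a PTR_TO_BTF_ID pointer is emitted as a probing load with
an exception table entry, so a faulting access zeroes the destination
register and execution continues instead of oopsing:

	r2 = *(u64 *)(r1 + 16);	/* r1 is PTR_TO_BTF_ID */
	/* if the load faults, the extable fixup sets r2 = 0 and
	 * resumes at the next instruction
	 */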

[ ... ]

> > > @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
> > >       int err;
> > >       int i;
> > >
> > > +     if (!ref_obj_id)
> > > +             return -EINVAL;
> > Hmm... are you sure this is needed?  If it is, it seems there was
> > an existing bug in release_reference_state() below, which could not catch
> > the case where "bpf_sk_release()" is called on a pointer that has no
> > reference acquired beforehand.
> 
> Since sk_release takes a PTR_TO_SOCKET, it's possible to pass a tracing
> struct sock * to it after this patch. Adding this check prevents the
> release from
> succeeding.
Not all existing PTR_TO_SOCK_COMMON pointers take a reference either.
Does that mean all these existing cases are broken?
For example, bpf_sk_release(__sk_buff->sk) is allowed now?

> 
> >
> > Can you write a verifier test to demonstrate the issue?
> 
> There is a selftest in this series that ensures calling sk_release
> doesn't work, which exercises this.
I am not sure what Patch 4 of this series is testing.
bpf_sk_release is not even available in a bpf tracing iter program.

There are ref tracking tests in tools/testing/selftests/bpf/verifier/ref_tracking.c.
Please add all refcount-related tests there to catch the issue.


* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-08 19:52       ` Martin KaFai Lau
@ 2020-09-09  9:16         ` Lorenz Bauer
  2020-09-09 15:42           ` Lorenz Bauer
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-09  9:16 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Jakub Sitnicki, John Fastabend, bpf, kernel-team

On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Mon, Sep 07, 2020 at 09:57:06AM +0100, Lorenz Bauer wrote:
> > On Sun, 6 Sep 2020 at 23:40, Martin KaFai Lau <kafai@fb.com> wrote:
> > >
> > > On Fri, Sep 04, 2020 at 10:58:59AM +0100, Lorenz Bauer wrote:
> > > > Tracing programs can derive struct sock pointers from a variety
> > > > of sources, e.g. a bpf_iter for sk_storage maps receives one as
> > > > part of the context. It's desirable to be able to pass these to
> > > > functions that expect PTR_TO_SOCKET. For example, it enables us
> > > > to insert such a socket into a sockmap via bpf_map_update_elem.
> > > >
> > > > Teach the verifier that a PTR_TO_BTF_ID for a struct sock is
> > > > equivalent to PTR_TO_SOCKET. There is one hazard here:
> > > > bpf_sk_release also takes a PTR_TO_SOCKET, but expects it to be
> > > > refcounted. Since this isn't the case for pointers derived from
> > > > BTF we must prevent them from being passed to the function.
> > > > Luckily, we can simply check that the ref_obj_id is not zero
> > > > in release_reference, and return an error otherwise.
> > > >
> > > > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > > > ---
> > > >  kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++------------------
> > > >  1 file changed, 36 insertions(+), 25 deletions(-)
> > > >
> > > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > > index b4e9c56b8b32..509754c3aa7d 100644
> > > > --- a/kernel/bpf/verifier.c
> > > > +++ b/kernel/bpf/verifier.c
> > > > @@ -3908,6 +3908,9 @@ static int resolve_map_arg_type(struct bpf_verifier_env *env,
> > > >       return 0;
> > > >  }
> > > >
> > > > +BTF_ID_LIST(btf_fullsock_ids)
> > > > +BTF_ID(struct, sock)
> > > It may be fine for the sockmap iter case to treat the "struct sock" BTF_ID
> > > as a fullsock (i.e. PTR_TO_SOCKET).
> >
> > I think it's unsafe even for the sockmap iter. Since it's a tracing
> > prog there might
> > be other ways for it to obtain a struct sock * in the future.
> >
> > > This is a generic verifier change though.  For tracing, it is not always the
> > > case: a tracing program cannot assume that the "struct sock *" in the function
> > > being traced is always a fullsock.
> >
> > Yes, I see, thanks for reminding me. What a footgun. I think the
> > problem boils down
> > to the fact that we can't express "this is a full socket" in BTF,
> > since there is no such
> > type in the kernel.
> >
> > Which makes me wonder: how do tracing programs deal with struct sock*
> > that really
> > is a request sock or something?
> PTR_TO_BTF_ID is handled differently, by BPF_PROBE_MEM, to take care
> of cases like this.  bpf_jit_comp.c has some more details.

Thanks, that helps a lot. I also dug into the BTF pointer patchset,
and now your comment about PTR_TO_BTF_ID being NULL makes sense too.
Sigh, I should've looked at this from the start.

What I still don't understand is how PTR_TO_BTF_ID is safe for a
struct sock* that points at a smaller reqsk for example. How do we
prevent a valid, non-faulting BPF read from accessing memory beyond
the reqsk?

>
> [ ... ]
>
> > > > @@ -4561,6 +4569,9 @@ static int release_reference(struct bpf_verifier_env *env,
> > > >       int err;
> > > >       int i;
> > > >
> > > > +     if (!ref_obj_id)
> > > > +             return -EINVAL;
> > > Hmm... are you sure this is needed?  If it is, it seems there was
> > > an existing bug in release_reference_state() below, which could not catch
> > > the case where "bpf_sk_release()" is called on a pointer that has no
> > > reference acquired beforehand.
> >
> > Since sk_release takes a PTR_TO_SOCKET, it's possible to pass a tracing
> > struct sock * to it after this patch. Adding this check prevents the
> > release from
> > succeeding.
> Not all existing PTR_TO_SOCK_COMMON pointers take a reference either.
> Does that mean all these existing cases are broken?
> For example, bpf_sk_release(__sk_buff->sk) is allowed now?

I'll look into this. It's very possible I got the refcounting logic
wrong, again.

>
> >
> > >
> > > Can you write a verifier test to demonstrate the issue?
> >
> > There is a selftest in this series that ensures calling sk_release
> > doesn't work, which exercises this.b
> I am not sure what Patch 4 of this series is testing.
> bpf_sk_release is not even available in a bpf tracing iter program.

I built a patched kernel where sk_release is available, and verified
the behaviour that way. My idea was that as long as the test fails
we've proven that releasing the sk is not possible. I realize this is
counterintuitive and kind of brittle. Maybe your point about
__sk_buff->sk will allow me to write a better test.

>
> There are ref tracking tests in tools/testing/selftests/bpf/verifier/ref_tracking.c.
> Please add all refcount-related tests there to catch the issue.

Ack.


-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-09  9:16         ` Lorenz Bauer
@ 2020-09-09 15:42           ` Lorenz Bauer
  2020-09-09 16:40             ` Martin KaFai Lau
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenz Bauer @ 2020-09-09 15:42 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Jakub Sitnicki, John Fastabend, bpf, kernel-team

On Wed, 9 Sep 2020 at 10:16, Lorenz Bauer <lmb@cloudflare.com> wrote:
>
> On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:

[...]

> > Not all existing PTR_TO_SOCK_COMMON pointers take a reference either.
> > Does that mean all these existing cases are broken?
> > For example, bpf_sk_release(__sk_buff->sk) is allowed now?
>
> I'll look into this. It's very possible I got the refcounting logic
> wrong, again.

bpf_sk_release(__sk_buff->sk) is fine, and there is a test from Martin
in verifier/sock.c that exercises this. The case I was worried about
can't happen since release_reference_state returns EINVAL if it can't
find a reference for the given ref_obj_id. Since we never allocate a
reference with id 0 this ends up as the same thing, just less explicit
than checking for id == 0.
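
For reference, the lookup that gives this guarantee looks roughly like
this (paraphrased from kernel/bpf/verifier.c, details elided):

	static int release_reference_state(struct bpf_func_state *state, int ptr_id)
	{
		int i;

		for (i = 0; i < state->acquired_refs; i++)
			if (state->refs[i].id == ptr_id) {
				/* remove the reference from the array */
				return 0;
			}
		return -EINVAL;	/* no such id; id 0 is never allocated */
	}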

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH bpf-next v3 1/6] bpf: Allow passing BTF pointers as PTR_TO_SOCKET
  2020-09-09 15:42           ` Lorenz Bauer
@ 2020-09-09 16:40             ` Martin KaFai Lau
  0 siblings, 0 replies; 13+ messages in thread
From: Martin KaFai Lau @ 2020-09-09 16:40 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Jakub Sitnicki, John Fastabend, bpf, kernel-team

On Wed, Sep 09, 2020 at 04:42:57PM +0100, Lorenz Bauer wrote:
> On Wed, 9 Sep 2020 at 10:16, Lorenz Bauer <lmb@cloudflare.com> wrote:
> >
> > On Tue, 8 Sep 2020 at 20:52, Martin KaFai Lau <kafai@fb.com> wrote:
> 
> [...]
> 
> > > Not all existing PTR_TO_SOCK_COMMON pointers take a reference either.
> > > Does that mean all these existing cases are broken?
> > > For example, bpf_sk_release(__sk_buff->sk) is allowed now?
> >
> > I'll look into this. It's very possible I got the refcounting logic
> > wrong, again.
> 
> bpf_sk_release(__sk_buff->sk) is fine, and there is a test from Martin
> in verifier/sock.c that exercises this. The case I was worried about
> can't happen since release_reference_state returns EINVAL if it can't
> find a reference for the given ref_obj_id. Since we never allocate a
> reference with id 0 this ends up as the same thing, just less explicit
> than checking for id == 0.
Thanks for checking!

